Computers and Tech

Unreal Tournament’s End of Active Development Is A Symptom

So, the news broke today that the reboot of Unreal Tournament was no longer in active development. It’s not much of a surprise: not only has there not been an update to the title in nearly a year, there hasn’t been an update to their development blog in over a year, either.

Now, in addition to being a general fan of the title, its business model was a favorite of mine, too: the game was free, with no in-app purchases or lootboxes. There was a store where users could sell skins, mods, and character models, with Epic Games taking a cut, and the Unreal Engine 4 powering the game would be available for developers of other titles to use, with royalties paid on the engine after a certain revenue threshold.

However, Epic Games struck gold with Fortnite. If you haven’t at least heard of it by now, you probably haven’t spoken to an adolescent since the Obama administration. It’s so popular that Sony reversed its stance on cross-platform play for the first time ever in the Playstation ecosystem. Epic released the Android app on its own website, rather than in the Google Play store…and got 15 million downloads in three weeks; by contrast, I’m having a rough time trying to come up with another app outside the Play Store that has broken even its first million. It’s that big. The fact that Epic has been focusing on printing money with Fortnite rather than developing Unreal Tournament is not just common sense; it’s almost absurd to try to justify the inverse.

While the unbridled success of Fortnite is undoubtedly a major reason why UT development has stalled, I submit that it’s far from the only reason. After all, Epic Games has been in the business since the 1990s, and is fully aware that empires come and empires go; Minecraft, Angry Birds, Halo, and Doom all testify to this fact. I think there’s a deeper reason why.

Unreal Tournament hails from a completely different era in gaming. UT2004 shipped with a level editor and dedicated server software. For some, a part of the fun was making one’s own maps, character models, and even total conversion mods, frequently distributing them for others to enjoy. While quality levels varied significantly, communities formed around map and mod development. Even if you weren’t a developer, one of the major draws to the game was that downloadable content was free, and created by the players.

Fast forward to 2018, and that’s not at all how things work anymore. I can’t recall the last major game release that allowed players to self-host their servers or add their own created content, let alone ship with the tools to do so. New maps and character models are almost exclusively paid add-ons now, and few players remember it any other way. Even those who made their own content for UT in its heyday are likely either employed in some form of design or development, or have moved on to other things.

Those who are still doing this sort of development have a plethora of options, from the open source Alien Arena and Freedoom to GoldenEye: Source, to straight up developing their own indie games to release on Steam. With so many options courting a dwindling number of skilled individuals, Epic’s plan of ‘bringing the band back together’ was always going to be an uphill battle. Moreover, even the raw player numbers probably weren’t great; Quake Champions, Toxikk, and other arena shooters are solid options for players who aren’t perfectly happy playing UT2004, a game whose mechanics and balance are so well done that its era-appropriate graphics can be readily overlooked.

I don’t think this is really the end of UT development, though. Like I said, empires come and empires go, and while it makes sense for Epic to cash in on Fortnite while it’s a household name, by 2021 (if that long), there will be another game to take the crown. While Fortnite will probably still be popular enough to handle the payroll, the focus will likely shift back to developing and licensing the Unreal Engine. With hundreds of games built on the engine over the years, including heavy hitters like Mortal Kombat X, Spec Ops: The Line, Rocket League, Infinity Blade, the Batman: Arkham series, and of course the Mass Effect trilogy, licensing the engine is far and away Epic’s best source of steady income.

And when game developers are looking around for the engine upon which their next title should be based, there is no better way for Epic to showcase the Unreal Engine than to have its namesake available for free.

Call of Duty Black Ops 4 – One More Thing With Which I’m Incompatible

So, I took a little time to try my hand at Call of Duty: Black Ops IIII. And I am left to assume that it’s just one of those things with which I have a fundamental incompatibility…either that, or Activision has simply failed to learn the lessons of the games that came before this one.

Now, I’m sure I’m not entirely qualified to speak on the game authoritatively; I own Modern Warfare and the original Black Ops, games whose single player campaigns I’ve started twice and never completed.

I knew going into it that the single player mode was essentially just a tutorial; there were no shortage of pieces written about the fact that the game had no real single player campaign at all. I was also well aware that the game had loot boxes and in-app purchases as integral components of its design.

Jim Sterling has made a number of videos on the topic of lootboxes and microtransactions with which I generally agree, so I won’t go into detail on that front. The bigger issue I have with the lack of a single player campaign is that adding one is trivial. The first Black Ops game had a story. It was a fairly outlandish one, but CoD has never owed its popularity to its storytelling. Not having a story-based single player campaign is regrettable, but Unreal Tournament 2004 solved that problem over a decade ago with a simple progression ladder: win multiplayer matches against bots to advance to the next challenger, and so forth. Its use of the exact same maps and character models as the multiplayer game meant that development time was minimal, it gave players desiring a single player experience a way to have one, and everyone had a way to get good enough to play multiplayer.

Now, Ben ‘Yahtzee’ Croshaw describes Destiny 2 as a game where the sum total of the objectives is “go to the place and shoot the lads”, with a paper-thin story regarding *why* you’re going to the place and shooting the lads. Some readers might say, “But don’t you like Unreal Tournament, where there’s not only no reason for shooting the lads, but since the lads you’re shooting are in the same arena as you, you don’t even get the satisfaction of going to the place to shoot them?” Well, yes…but I think there are a few reasons why I hold UT to a different standard than CoD.
First, UT doesn’t have the pretense of realism. For example, the earlier CoD titles that put the franchise on the map had their weapons closely modeled after real firearms, albeit not always military issue. Newer installments have moved away from that attention to detail, but it was a part of the early design. Early CoD games were set in actual historical theaters of war, the first two Modern Warfare installments take place in areas of conflict that are at least somewhat believable, and while Black Ops went for the ridiculous in the back half of the game, it at least began in a historical conflict where one really could see a Black Ops mission taking place. Part of the fun was the fact that players could participate in historical events, and while for many it was likely just an excuse to go to the place and shoot lads in uniforms laden with swastikas, that historical grounding was what set CoD apart from the literally hundreds of first person shooters released before it, including iconic titles like Doom and Halo.
Unreal Tournament never did any of this, and was always completely fictitious and fantastical in every way. From its remote planets to its impossibly proportioned character models to its brighter colors to its weapon loadout clearly focused on game mechanics, the title was always intended to be taken at face value. Asking why we’re capturing a flag in UT is like asking why we’re stacking boxes in Tetris or eating dots in Pac-Man.

One may well argue that CoD has been moving away from realism for some time, and the lack of a single player campaign simply reflects that shift in focus, with reasoning anywhere from the pragmatic “players were spending 99% of their time in multiplayer anyway” to the cynical “a single player campaign, even a simple progression ladder, would conflict with Activision’s primary objective: selling lootboxes, DLC maps, and live services”. Moreover, there are probably some who would say that my relative inexperience with CoD is a part of the problem. That too is a distinct possibility; Raycevick, who has played them, discusses this in greater detail. However, I submit that if Black Ops IIII is the natural progression of the title, it starts looking more and more like an arena shooter. Making this transition would put it into a subgenre where the things that made CoD stand out in its earlier iterations start to become a liability…especially when this installment has a $60 sticker price – a price so high, I could not find an arena shooter selling for even half of it. I could, however, find several of them for free – from the open source OpenArena to Alien Swarm, GoldenEye: Source, Quake Champions, Unreal Tournament, and the 800-pound gorilla: Fortnite.

Creating both an internal and a guest Wi-Fi network on a Sonicwall

I have a hate-hate relationship with Sonicwall. They’re annoying when they don’t work. I recently had to conjure up a procedure about how to configure a new Wi-Fi enabled Sonicwall with two different Wi-Fi networks, one for internal use, and the other isolated for guests. Here is that tutorial. It assumes an out-of-the-box Sonicwall config, starting with the initial setup wizard…


1. When going through the initial setup wizard, do NOT specify any Wireless settings.

2. For the internal wireless, use the Wi-Fi wizard. Set its IP Assignment to “Layer 2 Bridged Mode”; bridge to X0. Give it a useful SSID and be sure to use the WPA/WPA2 mode and give it a password. Do NOT create an additional virtual AP in this wizard.

3. Go to Zones, then Add a new zone. Set its security type to Wireless. Defaults are fine; if you’re being fancy, the Guest Services page allows for a captive portal to be set.

4. Go to Interfaces, then Add Interface, and choose Virtual Interface. Assign it to the Zone you just made, and give it a VLAN tag (10 is what I tend to use). Make its parent interface W0, and set its subnet mask to something bigger than a Class C (255.255.252.0 is what I tend to use). Click OK, and confirm the notice saying the Sonicwall can’t be configured from the VLAN.

5. Go to Network->DHCP Server. Click ‘Add Dynamic’. Check ‘Interface Pre-Populate’, and choose the VLAN you just made. Go to the DNS tab and add some public DNS servers; this matters especially on a network with a domain controller, since guests have no business querying your internal DNS.

6. Go to Wireless, then Virtual Access Point. Click ‘Add’ under the Virtual Access Point section. Give it a name and an SSID, and set the VLAN ID to the one you made earlier. Under ‘Advanced’ settings, set the Authentication type to WPA2-PSK, the cipher type to AES, and the ‘Maximum Clients’ to 128. Add a passphrase, then click OK. Also, you might want to edit the original SSID to allow 128 wireless clients as well, instead of the default 16.

7. Still in the Wireless->Virtual Access Point area, edit the “Internal AP Group” in the “Virtual Access Point Groups” section. Add the additional SSID you just created to the Internal AP Group. Click OK to exit.

8. Go to the Wireless->Settings area. On the drop-down labeled “Virtual Access Point Group” on the bottom, select the Internal AP Group option. Click Accept on the top.
(note: if you get an error saying “Status: Error: Too small 802.11 Beacon Interval for Virtual Access Point”, go to Wireless->Advanced, change the Beacon Interval to 500, and try this step again).

It will take about a minute for all SSIDs to become visible to devices…but everything will be properly configured when you are done.
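As a sanity check on the subnet chosen in step 4, Python’s standard ipaddress module will do the arithmetic; the 172.16.8.0 base address below is a hypothetical stand-in for whatever you assign the VLAN interface:

```python
import ipaddress

# Hypothetical guest VLAN addressing; substitute your own base address.
# The 255.255.252.0 mask from step 4 is a /22, i.e. four Class C's worth
# of addresses, versus the 254 usable hosts of a standard /24.
guest_net = ipaddress.ip_network("172.16.8.0/255.255.252.0")
class_c = ipaddress.ip_network("172.16.8.0/255.255.255.0")

print(guest_net)                    # 172.16.8.0/22
print(guest_net.num_addresses - 2)  # 1022 usable hosts
print(class_c.num_addresses - 2)    # 254 usable hosts
```

Plenty of room for a waiting room full of phones without ever touching the DHCP scope again.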

The Update Virus

We live in a technological world where ‘updates’ are basically an expectation of virtually anything connected to the internet, and I’m uncertain that it’s the best thing for a number of reasons.

I was discussing this with a friend of mine last week. Previous generations of video games didn’t have an update mechanism: if the game had bugs on the day the cartridges were programmed or the CDs were pressed, players got the glitches, and the game’s reputation reflected them. This gave developers (of games and other software alike) an incentive to do a good amount of quality assurance testing prior to release. The ship-then-patch model has removed much of that QA testing, letting the complaints and Youtube uploads of the earliest paying customers fill the gap. In other words, it has created an incentive for companies to use paying customers, rather than employees, as QA staff.

Mobile apps have gotten just as bad. With pervasive LTE, apps have less incentive to optimize their code, leading to situations like the Chipotle app requiring nearly 90MB of space. This could probably be done in 10MB of code (arguably less), but instead there’s a whole lot of unoptimized code that reduces available storage, increases the likelihood of bugs, and subjects folks with 16GB of storage – a near-infinite amount just 20 years ago – to an interminable cycle of storage management. Moreover, update logs used to describe new features, optimizations, and specific bug fixes; the majority of change logs have devolved into saying little more than “bug fixes and improvements”. Is the bug I’m experiencing one of the ones that got fixed? The fact that the Netflix app will no longer run on a rooted phone isn’t something that made it into a change log. Yet, with basically no information, many people allow desktop applications and mobile apps to update themselves with little accountability.

The fact that both Windows and many Linux distributions perform major updates at a semi-annual cadence is itself telling. The PC market has been fragmented for most of its existence. Even after it became the Windows/Mac battle (RIP Commodore, Amiga, and OS/2), there was a span when a person’s computer could be running Windows 98SE, 2000, NT, ME, or XP. Yet somehow, in a world prior to most things just being a website – where users could have dial-up or broadband, a 233MHz Pentium II (introduced in May 1997) or a Pentium 4 several times faster, 64MB of RAM or 320MB, a hardware GPU or none, and screen resolutions anywhere from 640×480 to 1280×1024 – it was possible for software developers to exist and make money in a far more fragmented computing landscape. There was little outcry from end users expecting “timely updates” from Dell or IBM or Microsoft, and the updates that did come out were primarily bug fixes and security patches. Nobody expected software developers or hardware OEMs to list “timely updates” as a feature.

So, why do I call it the ‘update virus’? Because the major OS vendors (Apple, Google, Microsoft) are all getting to the point where constant updates are an expectation, and they’re not just ‘security updates’ but ‘feature upgrades’. Many end users and fellow technicians I interact with have a condescending mindset towards those who choose otherwise. At first glance, I can’t blame people for being okay with new features for free, but my concern is how monolithic it all is. It is not possible to get only security updates on any of the three major platforms; UI changes are part and parcel. All three operating systems have gotten to the point where there is no “go away” button; Android’s OS release options are “update now” or “remind me in four hours”. Really? Six nags a day? No user choice in the matter? There was a massive outcry when Lollipop came out; the sweeping UI changes were difficult for many to deal with, and there were more than a few reports of measurably decreased battery life. I recently bought a tablet for my mother where updates made it impossible to access the Play Store; my only option was to root the tablet so I could disable the update, because the stock firmware ran fine. Is this truly an improvement?

Now, most people will argue “but security! but malware!”, and to an extent, they are right. Expecting users to manually disable SMBv1 for the sake of stopping the infamous WannaCry ransomware from spreading is certainly the epitome of ‘wishful thinking’. By contrast, I recently had a situation where the laptop I use for controlling my lighting rig when I DJ failed a Windows Update immediately before an event, getting stuck in an unbootable state and becoming impossible to use for its intended purpose. On what basis is that behavior not precisely the sort of thing typically associated with the very malware from which Windows Update purports to protect us?


Ultimately, I like “good updates”. Whether it is because they fix security holes or because they optimize a feature, I am very much in favor of good updates. I do not like “bad updates” – the ones that break an OS installation, or install at the worst possible time, or massively revamp the UI without a “classic mode”, or similarly prevent my devices from performing their intended function. With no way to determine the good ones from the bad, updating has gone from a best practice to a gamble.

And if I wanted to gamble, I’d go to a casino.

Review: TP-Link EAP Series Access Points

Long story short, I’ve wanted to upgrade the wireless connectivity in my apartment for some time. I’ve also been pretty impressed with TP-Link reinventing itself, from a bottom-of-the-barrel manufacturer typically grouped with Tenda and Trendnet, to creating genuinely solid products at competitive prices and becoming a solid mid-range competitor in the network/connected space. They’re one of the few companies where a packaging refresh seemed like more than just a graphic designer winning over the VP of Marketing, and instead reflected a shift in the products themselves.

I recently got addicted to buying the TP-Link LB100-series smart bulbs, primarily because they were the only ones on the market I could verify didn’t require the creation of an online account, and would function on a LAN even if I blocked the bulbs from getting to the internet using my firewall. Their managed switches have been solidly built and have a much better UI than in years past, and though it wasn’t the greatest performing router ever, the AC750 did a solid job in the versatility field, being either a router, an access point, a range extender or a bridge, depending on what was needed.

So when I saw they were making centrally managed access points at half the cost of even Ubiquiti, I needed to give them a try.

Two days in, and I’m 50/50 on them. Normally, I utilize either Ubiquiti or Ruckus access points. The latter is one of the industry standards, along with Aruba and Cisco, but at $400 for their lowest end access points (and requiring licenses, support contracts, and controllers for even modestly-sized rollouts), it’s a bit of a sticker shock if you’re not the sort of venue that houses a thousand people on a regular basis. In my experience, Ubiquiti offers 80% of the function for 30% of the price, but at higher densities, the differences become more apparent. I was hoping that TP-Link, with a number of similar features listed on the box, would be an option for those who want those features at consumer prices, or as my friend Charisse puts it, “champagne taste on a beer budget”.

One thing that was notable immediately was that the TP-Link EAP225 offers both standalone functions and centralized management functions. While Ruckus supports this, Ubiquiti requires a controller of some kind to be present. TP-Link’s central management software takes a number of cues from Ubiquiti’s UI, which was helpful. Setting the SSID and password were trivial, and I was happy to see client isolation options and the ability to configure a separate network without VLANs; admittedly I could use the included DC power adapter instead of my unmanaged switch and configure VLANs in Tomato, but that would defeat the purpose of having a PoE switch.

What I don’t like, however, is something that is likely to be fixed in the coming months: the software is annoying. According to it, my AP has been “Provisioning” for the past three hours, all while happily functioning as configured. The software doesn’t auto-download firmware; users need to download the files and specify their locations. Furthermore, I had to force-reset my AP after ten minutes when it didn’t come back the way it was supposed to, then re-adopt and re-provision it. Short of bricking the AP, this is basically the worst experience a firmware update can be. Ubiquiti and Ruckus both handle these seamlessly, and can even do them on a schedule.

The reason I attempted the firmware update in the first place was that the AP wasn’t reporting which devices were connected, though it did list a count. Now, to their credit, the firmware update did ultimately solve this. Also, the “locate” control from every other vendor causes the status LED to rapidly flash, flash a different color, or similar, which is incredibly helpful when trying to label APs in the software. TP-Link’s, however, just jumps to the ‘Map’ area, which is an utterly pointless function for an AP that hasn’t been placed on a map.

Finally, my apartment isn’t big enough to need two of these, so I have no idea how well the roaming and client distribution work…yet. Also, to their credit, this AP vastly outperforms my old one, a Linksys WRT1200AC running in AP mode with DD-WRT. The Linksys was lucky to get 8 or 9MB/s real-world transfer speeds on the other end of the apartment; I just did a transfer at 27MB/s from the same spot.

All in all, while I consider Ubiquiti to be very close in function to Ruckus at a sizeable discount, TP-Link is about 60% of Ubiquiti for 50% of the price. The good news is that most of their failings are software-based, and are easily rectified; they could easily bump themselves up to 90% of Ubiquiti as the Omada Controller software improves. I won’t be returning this AP anytime soon, but I won’t be recommending that clients eschew their Ubiquiti or Ruckus systems for them just yet, either.

My definitive guide to reclaiming Windows 10

So, some friends on Facebook were discussing the fact that Windows 10 updates are a problem. They take forever, they happen at inopportune times, and coming back from the major releases means custom file associations are reset and it’s entirely possible for programs to be uninstalled. I’ve also run into no shortage of instances where I’ve had to revert because an update left a computer unable to restart…not to mention that computers preparing for major updates run unbearably slow as they download and stage things. Microsoft thinks this is a good idea. They are the only ones.

Let’s make a few things clear here: my steps to resolve these issues are my personal preferences. If you do this, you will prevent your system from getting updates. Software that “works on Windows 10” might assume you’re on the most recent release, rather than whatever release you were on when you did these things. Reversing it all is a pain, and still might not work – just assume it’s a one-way trip. No warranty is provided if you mess things up. With that being said, let’s begin…

  1. Update Windows as much as you can up until this point. You probably don’t want to be running on two-year-old install media. This should also include the SMBv1 fix; no sense in keeping yourself open to WannaCry.
  2. Take care of the major stuff in one shot…
    • Download W10 Privacy: https://www.winprivacy.de/deutsch-start/download/. Extract it, and run it as an admin.
    • Download this file: W10-CustomConfig. Extract it somewhere, too.
    • Click Configuration, then Load. Use ‘Choose Path’ to navigate to where you extracted the ZIP file, and import the INI file.
    • Go through the different tabs and make sure there’s nothing you’d like to change. This is the config I use on my computer, but your needs may be different, so give it a quick once-over.
    • Click ‘Set Changed Settings’, then confirm it. It’ll take a few minutes to finish everything. Reboot when you’re done.
    • After rebooting, from a ‘run’ prompt or a command line (or the start menu search), type “services.msc”. When it loads, scroll down to “Windows Update”. Double click it, set it to ‘disabled’ (if it isn’t already), then click ‘stop’ (if it isn’t already).
  3. Completely napalm Windows Update…
    • Go to c:\windows\system32. Scroll down to ‘wuauclt.exe’.
    • Right-click, then click ‘Properties’. Go to the ‘security’ tab.  Click ‘Advanced’.
    • On the top where it lists the owner as “TrustedInstaller”, click ‘Change’. Type your user account name, then click OK. Click OK again to close out the “Advanced” window, then click ‘Advanced’ again to re-open it with the ownership changes.
    • Click ‘Change Permissions’, approving the UAC prompt if needed.
    • Click ‘TrustedInstaller’, then ‘Edit’. Uncheck everything except ‘Read’ (Windows Defender will replace it if you delete it or deny it ‘Read’ permissions). Do the same for the “System”, “Users”, “ALL APPLICATION PACKAGES”, and “ALL RESTRICTED APPLICATION PACKAGES” accounts as well. For added paranoia, remove everyone except “System” and “TrustedInstaller” so that it can’t run in a user context. Click OK, then OK again to commit the changes.
  4. Tell Cortana where to shove it. A word of caution, though: if you use Outlook, you won’t be able to do search-as-you-type. You will also wait forever for file system searches to be performed, though if you’re not using Everything to do your file system searches instantly instead of battling the green bar, you don’t know what you’re missing. Anyway, without further ado…
    • Go to C:\Windows\SystemApps. You’ll see a folder called Microsoft.Windows.Cortana_[something].
    • Go back up to the last step about applying read-only permissions to “System” and “TrustedInstaller”, then do those exact same steps on this folder.
  5. Start Menu Fix.
    • Install Classic Shell. With Windows Update neutered, MS won’t be messing with it, so the final release will be just fine and very reliable. If you’re skittish about that, the five bucks Stardock wants for Start10 is perfectly reasonable. If you want to use the stock Windows 10 start menu, you’re weird, but you won’t see random apps starting up. At the very least, install Classic Shell provisionally, as it gives ‘uninstall’ options for a number of Win10 apps that Windows won’t allow, leaving you with just the core.
  6. Anti-Telemetry.
    • Most of this was addressed with W10 Privacy, as it adds a whole lot of entries to your “hosts” file to minimize the outbound reporting. However, I strongly recommend using a third party antivirus, since Windows Defender will clear the hosts file entries when it runs scans. ESET NOD32 is my personal favorite, and it’s frequently on sale on Newegg.
    • Run these commands from an elevated command prompt:
      sc delete DiagTrack
      sc delete dmwappushservice
      echo "" > C:\ProgramData\Microsoft\Diagnosis\ETLLogs\AutoLogger\AutoLogger-Diagtrack-Listener.etl
      reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection" /v AllowTelemetry /t REG_DWORD /d 0 /f
    • Copy/paste this list of host file entries into yours. For added paranoia, if your firewall runs Tomato, you can use the integrated Adblocker to utilize this hosts file at the router level, though obviously this only protects you while you’re connected to that router.
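Whichever hosts list you end up using, it’s worth verifying the entries actually landed (and survived an antivirus scan, per the Windows Defender caveat above). Here’s a small Python sketch assuming the conventional hosts-file format; the two hostnames are illustrative examples, not a complete blocklist:

```python
# Sketch: confirm that a hosts file blackholes a given set of hostnames.
# HOSTS_PATH points at a demo file here; on a real system you'd use
# C:\Windows\System32\drivers\etc\hosts instead.
HOSTS_PATH = "hosts-demo.txt"
BLACKHOLE = {"0.0.0.0", "127.0.0.1"}

def blocked_hosts(path):
    """Return every hostname mapped to a blackhole address in the file."""
    blocked = set()
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop comments/whitespace
            if not line:
                continue
            addr, *names = line.split()
            if addr in BLACKHOLE:
                blocked.update(names)
    return blocked

# Demo hosts file; the hostnames are examples only.
with open(HOSTS_PATH, "w") as f:
    f.write("# telemetry blocklist\n"
            "0.0.0.0 vortex.data.microsoft.com\n"
            "0.0.0.0 telemetry.example.com\n")

print(sorted(blocked_hosts(HOSTS_PATH)))
```

If a hostname you expect to be blocked doesn’t show up, Defender has probably “cleaned” your hosts file again.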


Okay, those are the majors, and the required stuff for me when I first wipe and reload Windows 10 on my laptop. Understand, though, that I wipe and reload Windows on my laptop approximately once every 14 months, meaning I’m never woefully out of date.

Best of luck, everyone!

Ode to 8150, and the consequences of a recycling culture

For those who aren’t quite sure what I’m talking about, this is an HP LaserJet 8150. The first 8000 series printers were sold in 1998; the last rolled off the assembly lines in 2002. The manual says they weigh 112lbs, and while I don’t quite think they’re that heavy, they’re most definitely the sort of thing worth opting for “local pickup” on if buying from eBay. The Energy Star qualifications must have been much different back then, because their 135W “idle” power draw is dwarfed only by their 650W operational power requirements. By today’s standards, they are by no means the gold standard in power efficiency. They don’t connect via USB, they stand nearly three feet tall with a single paper tray, and their toner cartridges cost $150 a pop and are getting harder and harder to find. I’ve been hard pressed to find out how much they cost when they were released, but with refurbished units selling for $500 or more, I’d speculate that $1,500 would be a safe bet; today, it’s entirely possible to walk out of a Staples with half a dozen laser printers for that price.

And yet, I still consider them, and their 4000 series cousins, to be amongst the best printers ever made.

Their JetDirect cards, though requiring an insecure version of Java to interact with, still connect to modern networks. They speak PostScript and PCL, meaning that iPads and other mobile devices can print to them natively, with no configuration, despite postdating the printer by over a decade. Though the paper jams I’ve had in these printers have been difficult to address, they are incredibly few and far between. Those $150 toners? They’re rated for 17,000 pages, making it likely that the paper will cost more than the toner.
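For the curious, “speaking PCL to a JetDirect card” is about as simple as networked printing gets: port 9100 accepts a raw byte stream, and wrapping plain text in PCL’s reset sequence is enough for a printer like the 8150 to render it. A minimal sketch; the helper names and printer address here are my own illustration:

```python
import socket

PCL_RESET = b"\x1bE"  # ESC E: PCL printer reset
FORM_FEED = b"\x0c"   # eject the page

def build_pcl_job(text):
    """Wrap ASCII text in PCL resets so the printer starts and ends clean."""
    return PCL_RESET + text.encode("ascii") + FORM_FEED + PCL_RESET

def send_raw(host, payload, port=9100):
    """Ship a raw print job to a JetDirect-style port 9100 listener."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(payload)

job = build_pcl_job("Hello from 1998's finest.")
# send_raw("192.168.1.50", job)  # uncomment with your printer's address
```

No drivers, no queues; this is essentially what the “raw” port option in Windows and the socket:// backend in CUPS do under the hood.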

The real reason I believe these are the best printers ever made, however, is that I’ve yet to see one truly broken. Sure, they need a new fuser every 300-500,000 pages, but even those last longer than the 80,000-100,000 page ratings on fusers for contemporary printers. Beyond the fuser, the toner, and the rollers, I’ve never once seen one of these printers truly die. I can certainly understand a concern regarding survivorship bias; indeed, it is a fair argument to make. However, every one of these I have disposed of was retired only because it was being replaced with a newer printer, invariably for reasons other than a lack of function. It pained me a little each time.

But I think there is something a bit deeper that speaks to why no one is making a true successor to the 8000 series printers, and as usual, the reasons aren’t terribly technical.

Obviously, a printer that lasts for 20 years and costs $0.03/page in consumables isn’t making anyone rich, so it was in printer manufacturers’ best interest to raise the per-page cost while masking it behind cheaper individual toner cartridges. Conversely, getting people to spend even $500 on a laser printer today is an uphill battle. The market has largely polarized into smaller offices who are price sensitive and would rather pay $60 for a 1,500 page toner, and larger offices who lease printers and document centers and pay for consumables and maintenance as a function of the contract. Meanwhile, environmental concerns and regulations are also involved: newer printers really are more power efficient, their lighter weight means less fuel used during shipping, and more readily recyclable plastic reduces the environmental impact. Finally, just societally, we’re printing less. When was the last time you saw a printed photo taken since 2010 that wasn’t explicitly printed for the sake of framing that single image? Back in 2007, small photo printers were all the rage.
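To put rough numbers on that toner economics point (using the figures from this post; the $0.03/page estimate above presumably also covers fusers, rollers, and the like beyond the toner itself):

```python
# Back-of-the-envelope cost per page, toner only, using figures from above:
# the 8150's $150 / 17,000-page cartridge vs. a modern $60 / 1,500-page one.
old_cost_per_page = 150 / 17_000
new_cost_per_page = 60 / 1_500

print(f"8150 toner:   ${old_cost_per_page:.4f}/page")  # ~$0.0088
print(f"modern toner: ${new_cost_per_page:.4f}/page")  # $0.0400
print(f"ratio: {new_cost_per_page / old_cost_per_page:.1f}x")  # ~4.5x
```

Even before the fuser’s 300-500,000 page lifespan enters the picture, the old cartridge wins by a wide margin.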

My real question, though, is why we keep chasing the “new and shiny”. Who wants a 20-year-old printer, even if it functions nearly as well as the day it was purchased? Is the environmental impact of running a 20-year-old printer really higher than the manufacture, use, and disposal of three to five lower-quality printers over the same timeframe? Or was there a swath of these printers that failed in the first five years that I simply wasn’t around to see?

It’s tough to tell these things with any level of confidence, but I submit to you that when it comes to printers, they legitimately and quantifiably don’t make them like they used to. Whether or not that’s a good thing is a question I will leave you to decide. I, however, will wax sad that I have neither the room nor the print volume to keep one of these printers in my apartment.

My time with a Chromebook…and Linux

For those of you who know me, this is my laptop. It’s massive. It’s heavy. It gets a little under two hours of battery life in power save mode. And I wait for nothing. I game with the best, I render video with the best, and the 3.5 terabytes of storage means that I don’t delete anything. As a pet project, I adopted a second laptop. This is that laptop. It’s small, it’s not nearly as powerful, I had to order a 64GB SD card to give it any meaningful level of storage, and it doesn’t even run Windows. It’s a major change.

This time around, running Linux has been easier than in my prior attempts. I’ve been running Linux Mint, and it’s been the epitome of a “mixed bag”. Mind you, installing Linux on this particular Chromebook was quite the challenge, and involved some incredible support from Mr. Chromebox, who is a wonderful individual to work with and comes highly recommended for anyone looking to embark on a project similar to mine.

The usability of the Linux-running Chromebook has been greatly assisted by the number of applications available for it. Between my remote access software for work having a Linux port, along with our chat software and VPN client, the majority of my “daily grind” applications are covered, though the real boon has been the number of browser-based control panels I interact with. Having all of that covered and genuinely not needing to worry about battery life, while carrying it around in one hand with no need for a bag of accessories, is a huge help. For the first time basically-ever, the suspend/resume function works so perfectly that I don’t shut it down. My relatively-obscure Samsung printer was automatically found on the network and configured without ever needing a driver download. The Synaptic package manager does an incredible job of being an “App Store” for actual applications, making it easy to find needed programs and browse through categories while also ensuring that updates are handled effectively. I really do like all of these, and when combined with the lack of concern regarding telemetry from either Microsoft or Aunt Google, it really is a good experience.

I used the word “good”, not “perfect”, with intention; it’s the little things that get frustrating. ‘Home’, ‘End’, and ‘Delete’ keys are conspicuously absent. The F1-F10 keys have their Chromebook functions printed on the caps, requiring me to add P-Touch labels with their “F-values”. Meanwhile, the lack of any sort of an “Fn” modifier key means I have to use system tray icons for screen brightness and volume, rather than shortcut keys. The biggest issue, though, has been audio: despite following a tutorial I found online, I still can’t get audio playback independent of the HDMI port. Then again, in this day and age of autoplaying video ads, one might argue that it’s not a bug, but a feature.

The one application I was not able to find a reasonable analog for was Outlook. Now, don’t get me wrong, Linux has no shortage of e-mail clients. What it does have a shortage of, however, is mail clients that work with ActiveSync. I tried Eudora, Evolution, Thunderbird, Zimbra, and one or two others, none of which natively supported ActiveSync. In the process, I did discover that it’s possible to install Android apps, which led me to the excellent TouchDown client. This solves my problem, but not without issues of its own. Using a touchpad as a replacement for a finger-driven UI paradigm makes it difficult to select text for copy/paste, since the UI generally assumes you want to scroll or change focus. Additionally, it’s amazing how necessary multiple windows are when dealing with e-mail, something mobile apps simply don’t provide. On a related note, my Remote Desktop client connects seamlessly to Windows servers, but I cannot copy/paste text through the RDP session.

You’ll find no shortage of articles online discussing different people’s takes on why Linux has not become a viable contender in the desktop market. I think there’s at least a grain of truth in some of the major ones – my Adobe production studio and DJ software will always ensure I need to keep Windows around, at least provisionally. But what about the “Word and the Internet” crowd? There is no shortage of people for whom desktop Linux would be more than practical, as can be seen in the success of the Chromebook itself.

I do think, however, it’s largely psychological. People “know Word” and “know Excel”. There is a sense of familiarity that will always be tough to overcome; I would argue that the splash screen that says “Microsoft Word 2016” is the most desired feature of the suite. On the heels of that, I submit that formal computer education today teaches “Microsoft Word”; it does not teach “word processing”. When software titles are taught, rather than principles, change becomes more difficult because there is more perceived difference than truly exists. I think this, along with the “death of a thousand paper cuts” of small frustrations like the ones above, is what keeps the door to desktop Linux closed for most people.

My experience of “switch hitting” between Windows and Linux will continue to evolve. I am happy I did it. 

Why Communicators and Tricorders will never exist…or shouldn’t.

I decided to dust off my copy of the Star Trek TNG Technical Manual and see what it had to say about the famous communication devices and “exposition boxes” that became as much a part of Star Trek as Klingons and transporters. From what I read, I’m pretty certain they will never exist.

“But Joey! We’ve surpassed comm badges already! They’re called cell phones, and you have one! How can you say they don’t exist?” Well, that depends on how we define “communicator”. A device that lets two people who each have one talk to each other? Sure, cell phones fit that role in the broadest sense. Dig just a little deeper, however, and you’ll likely agree with me that true communicators are likely to forever remain a plot device rather than an actual product.

Let’s start with the most obvious example of this: Neither Kirk, nor Picard, nor Sisko pays a Verizon bill, and Janeway was too far away to do so. While cell phones require the PSTN to function, communicators and comm badges clearly do not. Even if we get as close as currently possible – fully decentralized, open-source, peer-to-peer voice communication software – the call is still carried by one’s ISP, rather than traveling directly between the initiator and recipient.

According to the technical manual, the maximum range of a comm badge is 500 kilometers. Even if we cut that down to 100km, that’s still beyond the horizon of virtually any planet an away team could land on without gravity being a crippling problem, meaning that communicators can punch through at least some of the curve of a planet. By contrast, current technology would require roughly 5,000 watts of FM transmission power to achieve something even remotely close without a line of sight, and a clear line of sight is very seldom available between sections of an away team engaging in dialogue. Now, to be fair, it’s highly irregular for parties to be more than a mile or two away from each other when using communicators, but while some high-quality Motorola walkie-talkies might get a mile of range, they require both batteries and antennas which each exceed the size of a comm badge. Moreover, communicators and comm badges only experience static when relevant to the plot – literally no cell phone owner can say that they’ve been able to thoroughly avoid dropped calls or audio dropouts.
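To put the over-the-horizon point in concrete terms, here’s a minimal sketch of the standard VHF/UHF radio-horizon rule of thumb (the 4/3-earth-radius approximation used for terrestrial radio planning). The chest-height antenna figure is my assumption, not anything from the technical manual:

```python
import math

# Radio horizon under the standard 4/3-earth model:
#   d_km ≈ 4.12 * (sqrt(h1) + sqrt(h2)), with antenna heights in meters.
def radio_horizon_km(h1_m: float, h2_m: float) -> float:
    """Maximum line-of-sight radio path between two antennas on a smooth Earth."""
    return 4.12 * (math.sqrt(h1_m) + math.sqrt(h2_m))

# Two crew members wearing comm badges at roughly chest height (~1.5 m, assumed):
d = radio_horizon_km(1.5, 1.5)
print(f"{d:.1f} km")  # roughly 10 km — nowhere near the 500 km spec
```

On an Earth-sized planet, two badge-height transmitters lose line of sight after about ten kilometers; everything beyond that has to punch through the planet’s curve, which is exactly what current radio technology can’t do without towers, satellites, or enormous power.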

Let’s assume that the range issues are addressed by way of “subspace”, the magical portion of the space-time continuum that powers both warp drive and communications in the Trek universe. The next massive concern is how communicators decide who the recipient of a message is. When Kirk says, “Kirk to Enterprise”, does everyone on the ship hear it? Just the bridge? Furthermore, the phrase “Kirk to Enterprise” itself is frequently heard by the recipients. Even if the computer onboard the ship is ascertaining that the message is for them, do away team communicators have that same ability to discern? Ultimately, the future has no concept of ringtones, except one time toward the end of the Voyager episode “Future’s End, Part 1”. The rest of the time, the message is heard ‘on speaker’, making that scene rather strange. In another episode of Voyager, Chakotay calls out “to anyone who’s left” as the ship is being overrun by aliens. How does the communication system know to reach only Starfleet crewmen, and do so ‘on speaker’ without the hostile aliens hearing? I hear you all saying, “It’s just a plot device”, and that’s fine, but it doesn’t jibe well with the notion that cell phones are akin to the communicators in Star Trek. The amount of technology required to facilitate communication between the communicators over great distances, establish the recipient, do so without static or interference, and perform all of these tasks without any communication infrastructure beyond the devices themselves is far beyond anything we presently possess.

Let’s then discuss tricorders: devices that have a sensor for basically-anything, and a tiny display that makes it nearly impossible to read the results, eclipsed only by the minuscule size of the controls. Sure, today’s phones have gyroscopes, ambient light sensors, cameras, compasses, even temperature and pressure sensors, but how accurate are the readings? Accurate enough for a racing game, sure, but does anyone use the onboard sensors for measuring with scientific accuracy? No, they do not. Would anyone be comfortable with a doctor taking measurements with an iPhone rather than dedicated tools? That’s unlikely as well. There is still a world of difference between the capabilities of a tricorder as a scientific measuring instrument and the capabilities of current smartphones. As a counterpoint, however, it’s surprising how infrequently (if ever) data communications between tricorders and/or the ship itself are used. If the comm badges in TNG are able to communicate over 500km, shouldn’t tricorders have some circuitry of that nature implemented as well? They really should be better at that, and there is no shortage of moments where data transmission or nonverbal communication would have been helpful. I’ll close that thought by addressing the idea that my thoughts on this front only stem from pervasive text messaging that was not prevalent at the time of TOS or TNG. To that, I will say that the inclusion of a small CRT display in TOS implies an intended output, and by TNG, there was no shortage of text-based communication happening over BBS systems, IRC, and Usenet. The idea of transmitting messages that way was far from foreign.

The next time you’re watching an episode of Star Trek and think that their handheld devices already exist in more usable forms, remember the deeper implications: there is still plenty of work to be done to achieve the levels of functionality we see used to advance the plot.

The KeyOne, and a reviewer who can’t think beyond himself

I don’t mind carrying an iPhone 6S for work. It’s a good phone. I have maybe a dozen apps, all of which could be websites just as easily, except maybe Swype. Given that I’m not using it as a daily driver, I’m pretty happy with what it does and how it does it…but when I caught wind of the Blackberry KeyOne, I wasted no time pestering my boss about it.

It’s not a phone for everybody, nor is it intended to be. It is, however, intended to serve a niche. That fact eludes David Pierce, the individual who wrote the phone review for Wired Magazine. Go ahead, give it a read. The rest of the review will make less sense if you don’t, but not as little sense as his 4/10 score.

You know what a BlackBerry says about you now? …It says you probably still have an AOL email address, carefully curate your MySpace Top 8…It says, above all else, that you bought the wrong phone.

David starts his review by indicating that people who desire a BlackBerry are caught in the past, but provides no basis for this claim aside from alluding to BlackBerry’s fall from corporate dominance. It’s an ad hominem attack that indirectly contradicts his next paragraph, which notes that TCL, the actual manufacturer and licensee of the BlackBerry name and software, makes excellent products. Am I left to assume that a “TCL KeyOne” would have avoided the ‘stuck in 2006’ tone?

The only problem is that physical keyboards are a bad idea. They’re not more efficient, no matter what your nostalgic brain tells you. Touchscreen keyboards are faster, more versatile, more usable.

David might be at least somewhat accurate here, but it also sounds like he’s never dealt with some of the frustration. They’re faster, until you’re entering a password. They’re more versatile, until you’re in a Remote Desktop session that’s expecting a regular keyboard. They’re more usable, until you’re in a remote SSH session cycling through the different sets of symbols. Are these common things? Of course not…but the KeyOne isn’t targeting the Swiftkey crowd.

They can do swipe-typing, change size and shape to your liking, and switch languages at will. They go away when you don’t need them.

Swipe-typing is great, but it’s only needed to keep typing on a virtual keyboard somewhere on par with a physical keyboard. David does make a valid point that the keyboard can be removed from the screen when non-typing tasks are happening, and I do need to give credit for that. I will similarly concur that users requiring multiple languages are indeed better served with on-screen keyboards.

David calls the KeyOne’s 4.5″ diagonal screen “small”, but my iPhone 6S has roughly 3.5 diagonal inches of viewable space with the keyboard present. Moreover, it wasn’t until the iPhone 6 that Apple shipped a phone with a screen north of 4.5″ diagonal. iPhones sold by the millions with smaller amounts of viewable screen (and an on-screen keyboard taking up a substantial portion of that when typing), so it definitely seems to be a double standard.
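For what it’s worth, that viewable-space figure holds up to a rough geometric check. This sketch assumes a 16:9 panel on the iPhone 6S (4.7″ diagonal) and an on-screen keyboard covering about 40% of the screen height in portrait; both proportions are my assumptions, not anything from the review:

```python
import math

# Width and height of a portrait-orientation screen from its diagonal
# and aspect ratio (9:16 in portrait for a 16:9 panel).
def portrait_dims(diagonal_in: float, ratio_w: int = 9, ratio_h: int = 16):
    scale = diagonal_in / math.hypot(ratio_w, ratio_h)
    return ratio_w * scale, ratio_h * scale

width, height = portrait_dims(4.7)           # iPhone 6S panel
visible_height = height * (1 - 0.40)         # keyboard eats ~40% of the height (assumed)
visible_diag = math.hypot(width, visible_height)
print(f"{visible_diag:.2f} in")  # lands in the mid-3-inch range, near the 3.5″ figure
```

Even with generous assumptions, the iPhone’s typing-time viewable diagonal comes out a full inch smaller than the KeyOne’s always-available 4.5″.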

There is a clear confusion between ‘available features’ and ‘necessary features’. The fact that the KeyOne wasn’t customized to best meet David’s workflow isn’t a shortcoming of the phone. He writes:

You can map each key to a shortcut…but I miss being able to just start typing and launch straight into search. You can swipe up and down to scroll through webpages or apps…But you can also do that, you know, on a screen.

I am certain the shortcut keys can be disabled, or a key could be mapped to open Google, or David could perform one whole tap on a bookmark saved to his home screen; a tap is required on any phone to cause the keyboard to come up anyway. Swiping on the keyboard for scrolling sounds incredible. Not only is your hand not blocking content as you’re scrolling (a huge feature in itself), but I’ve lost count of how many times a “scroll” swipe has been confused with a “touch”, and ended up tapping a link erroneously. Just because it’s possible to do on a screen doesn’t mean that the use of a keyboard can’t improve the process.

Most of the phone’s security work happens in the background, only alerting you if something goes wrong.

How is this a passing comment and not seen as a massive improvement? How many Android users get multiple prompts every time they install an app? Lookout, the ‘security’ software that ships on many Android phones by default, provides more nags and notifications and annoyances than actual positive function. If a phone can be kept secure with virtually no false positives so that alerts can be assumed to be legitimate and worth addressing, that is an incredible improvement for many users coming from the Android ecosystem.

Really, everything about the Keyone other than the keyboard is good enough—and sometimes even great.

David gave a 4/10 rating for a device that, according to this statement, has one drawback?

My point is that you do not want a phone with a hardware keyboard.

A phone with a hardware keyboard is not going to take the world by storm. TCL knows that, BlackBerry knows that, Google knows that, and the carriers know that. What the BlackBerry does deliver, though, is a phone that serves the needs of those who have always felt autocorrect was a compromise. A virtual keyboard may be a bit faster when Swype or SwiftKey is frequently accurate, but “out”, “or”, and “our” will always be problems. Autocorrect is great for common phrases, but terrible for command-line use. Sure, MS-DOS isn’t used by most people today, but I use it more days than not in my line of work.

The most ironic part of David’s rant is that he probably didn’t type it on a virtual keyboard. In all likelihood, it was typed on a desktop or a laptop with a physical keyboard. I have no proof of this, but even if he did use a phone, he would have had more screen space to revise the article while typing on the KeyOne than on an iPhone.

Now, if David really wanted to dissuade those of us who believe that a renaissance of Blackberry is a good thing, he could have pointed to the fact that a whole lot of the book 50 Shades of Grey was written on a Blackberry.
