Hello AWS!

Pursuant to my wanting to dip a toe in the water for work, I’ve moved my blog to Amazon AWS. You should find it a bit faster now. Let’s see how well I learn to do things on AWS.

Voice IVRs Need To Die: A Rant

I had something else planned to write about. But last night, I had a simple question about paying my American Express bill. Like any normal person who had such a question, I made a phone call.

That call took seven minutes. One was spent getting my answer from Charlotte, two were spent on hold, and four were spent convincing the automated phone system that I did, in fact, need to speak to a representative.

Here is the problem:

No, not the command line itself. I use one all the time. Not for everything, certainly, but I do use one. In 1988, everybody who used a computer used one of these (except, of course, those Mac and Amiga folks). By 1995…basically nobody did. The reason command line interfaces are relegated to developers and sysadmins is that they have a major flaw: what do you type? It isn’t readily apparent which commands get things working, and the list of those commands isn’t intuitively discoverable, either.

Voice-based phone systems have the exact same problem as command lines. I called American Express to discuss a question about paying my bill…but when I said “question about paying a bill”, it told me the status of my last payment and asked if I wanted to make another one. I then said, “Ask a question”, only to be condescendingly read four paragraphs that amounted to ‘look at the website’. Eventually, I just held down the ‘0’ key until it said, “I’ll get you over to a representative”. It then asked another question ‘so that it could get me over to the right representative’, and when I answered, it said, “I’m connecting you to a representative”. I’ll also mention that virtually every prompt up to that point ended with a “sorry, I didn’t get that”. The representative I ultimately reached understood my question and answered it in less than a minute.

It evoked memories of the Frasier episode “Roe to Perdition”, in which Martin tries to return an extra $20 bill to a bank and ends up shouting “PER-SO-NAL!” at one such system. When he gets nowhere, he heads to the bank to talk to a human, who herself gets on the phone and yells ‘personal’ in the exact same way. That episode aired in 2003, and automated phone attendants driven by voice prompts are just as useless as they were nearly 20 years ago. The fact that this technology remains as problematic today as it was in the year Finding Nemo and Pirates of the Caribbean were released leads me to believe that the issue is more fundamental than technical.

I had some hope about two years ago when I saw the demo for Google Duplex. While the demo was met with skepticism by some at the time, it does appear that the tech is being used ‘in the wild’ at this point. I had always hoped that Google would let Duplex integrate with phone systems, where people could ask natural language questions and talk to an AI that’s able to route users to the right place by intelligently making the distinction between “make a payment” and “question about making a payment”. It looks like the technology exists, but unsurprisingly, it hasn’t made inroads into this field.

This leaves the human element far worse off than it could be. Now, I understand the major issue with having human receptionists: people are likely to tell their whole story to the first human they talk to, even when the matter should really be handled by someone in a specific department – typically billing or support. While my particular question likely could have been answered by just about anyone, not every question is that simple, and automated attendants do provide some base-level routing.

What we have now, though, is a command line. It doesn’t look like one, and it might use words instead of commands like “ls -alFh”, but a command line it is. One might argue that it’s more of a menu-driven interface with a hidden menu, but either way, when ‘navigating the menu tree’ takes longer than a plurality of the calls it routes, callers begin from a starting point of frustration, which increases the amount of work call center employees must do compared to helping customers who weren’t already frustrated when they dialed. Voice prompts make life worse for both sides of a customer service call, even more so when every attempt to guess a command ends in “I’m sorry, I didn’t get that”. It’s less human, and it yields no benefit for the phone system’s owner.

This leads us to the “For X, Press 1”, truly menu-driven phone interface. It’s the least-bad option, but when the late, great Robin Williams could make these systems part of a stand-up routine, praising them amounts to saying “at least our customer satisfaction levels aren’t as bad as Comcast’s”…yes, it’s good that things aren’t worse, but that’s not a statement of success. The problem with menus is that, more often than not, they are implemented poorly. The fact that the website GetHuman.com exists is a testament to this. Many phone systems have too many options, land users on recordings that take too long to get to the next prompt, and suffer from routing loops and unnecessary levels of complexity.

As I’ve considered how phone systems should be laid out, here’s what I’ve come up with: until Google Duplex and its enterprise components are integrated into a phone system, phone menus should have no more than five options, and each of those options should lead to at most one additional menu with five options of its own. This can be stretched to three menus of depth if and only if the first menu consists purely of language selection. That yields a total of 25 possible destinations for a call (sketched below), and I’m hard-pressed to think of businesses whose call centers would need more than 25 destinations, not counting direct extension dialing. If one does, then there’s probably justification for a second phone number, and the process repeats again.
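To make that rule concrete, here’s a minimal sketch in Python (the menu contents are made up, and no real IVR platform or API is assumed) that checks a proposed menu tree against the five-option, two-level limit and counts the possible destinations:

# Each menu is a dict: an option either leads to a submenu (another dict)
# or to a final destination such as a queue or extension. A language-selection
# menu in front would simply wrap this whole structure and isn't modeled here.
MAX_OPTIONS = 5
MAX_DEPTH = 2

def count_destinations(menu, depth=1):
    """Validate one menu level and return how many destinations it reaches."""
    if depth > MAX_DEPTH:
        raise ValueError(f"menu nesting of {depth} exceeds the limit of {MAX_DEPTH}")
    if len(menu) > MAX_OPTIONS:
        raise ValueError(f"{len(menu)} options on one menu exceeds the limit of {MAX_OPTIONS}")
    total = 0
    for target in menu.values():
        if isinstance(target, dict):      # this option opens a submenu
            total += count_destinations(target, depth + 1)
        else:                             # this option is a final destination
            total += 1
    return total

example = {
    "billing": {"make a payment": "payments queue",
                "question about a payment": "billing reps"},
    "technical support": {"internet": "internet queue", "tv": "tv queue"},
    "everything else": "general reps",
}

print(count_destinations(example), "destinations; the cap is", MAX_OPTIONS ** 2)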

So, that’s my rant.

Matthew 18 in a Post-Facebook Society

I run a small RocketChat server. Nothing major, just a handful of friends in a private chat, my own personal contribution to the XKCD Chat Platform problem.

I’d love to have more of my friends in it, but RocketChat has a strength that is also its fundamental weakness – the “general” room. Everyone is in it. I can change that behavior if I want, but that’s not the point. 

I’ve got 850 Facebook friends…and only five of them are in RocketChat. Now sure, the nature of the term “Facebook Friend” comes into play here; the one FB friend I met on AIM nearly 20 years ago may not be much of a candidate, nor would the sister of a relatively new friend from an online community, who sent me a request anyway. But even if I put 90% of my Facebook friends into that category, I would still have trouble getting the remaining 85 into the same chatroom together.

It would eventually devolve into an argument. That argument would then have chilling effects on discussion thereafter – some people would leave. Others would ignore the general chat and stick to the PMs. Discussion after that would become surface level, as nobody wants to ignite another powder keg. Then, one inadvertently starts, and the cycle begins anew until there’s nobody left except whoever agrees with the last person to win the argument.

I feel like the advice in Matthew 18 is timeless and incredibly relevant, even if you’re more of an atheist than Richard Dawkins, and there are concepts between the lines that are worth exploring. For those who aren’t familiar with the passage, it goes like this:

15 If another believer sins against you, go privately and point out the offense. If the other person listens and confesses it, you have won that person back. 16 But if you are unsuccessful, take one or two others with you and go back again, so that everything you say may be confirmed by two or three witnesses. 17 If the person still refuses to listen, take your case to the church. Then if he or she won’t accept the church’s decision, treat that person as a pagan or a corrupt tax collector.

The underlying assumption here is that there is, at some level, a mutual desire to rectify a relationship. Also assumed is a shared agreement on an authority. Both of those are less of a given in modern society. John Oliver said it well when he described a segment of discourse between an Infowars reporter and a very left-wing protester: “What we do have there is a nice distillation of the current level of political discourse in America: two people, who don’t really know what they’re talking about, being condescending to each other nonsensically until one of them lands a sick burn.” While in Oliver’s clip it’s unlikely that either party had any desire to reach consensus, I submit that the notion of salvaging a relationship at the expense of winning a particular argument is largely lost on modern society. A willingness to have one’s perspective shifted is a fundamental requirement for making any headway, and that willingness seems to be in short supply.

Once there is agreement that the intent is to salvage a relationship, the private discussion between two people disagreeing is useful because it prevents the spread of rumors and helps to address small grievances on a small scale. To bring two or three witnesses into the disagreement is to provide an outside perspective; ideally one that would impact how both people would approach the disagreement, and hopefully the input would be received well enough to achieve a resolution without things escalating further.

Getting to the ‘take it to the church’ step, things get a bit interesting because of the concept of ‘church’ at that time – Jesus wasn’t describing a group of several hundred people with an elder board. Then again, Jesus was talking to a crowd more familiar with the temple system, which very much did have a hierarchical structure and political power, so I need to do a bit more research on that topic. Either way, I think it’s safe to say there is a case to be made for taking the dispute to a mutually recognized source of authority, to whose ruling both parties consider themselves subject.

If one party decides that the ruling isn’t valid for whatever reason, then “treat them like a tax collector” is notable. Tax collectors were considered so undesirable in their society that the gospels commonly reference “sinners and tax collectors”, implying an “even worse than sinner” connotation. At the same time, the audience of this teaching still dealt with tax collectors. Perhaps it was begrudgingly, perhaps it was a “get in, collect your taxes, get a receipt, and get out” sort of a deal, but Jews still had to work with them, and every so often there was a Zacchaeus – a tax collector who turned from his ways.

I think this sort of clear and direct escalation is incredibly relevant today. Society has generally turned to “sick burns” as the way arguments are decided, and has come to see winning as more desirable than reconciliation; that is exactly the sort of fundamental shift Jesus spent time encouraging His followers to avoid. The results of this shift have clearly caused a level of enmity that divides people who could probably “agree to disagree” successfully under Jesus’ system, but are sworn enemies on Facebook.

This leaves me with a sparsely populated RocketChat server, and social gatherings which are fewer and further between than even five years ago. Whether you identify as a follower of Christ or not, I can guarantee there’s someone you disagree with on something. You probably agree with them on ten others. Try focusing on that, and try salvaging a relationship. It won’t be fun, but it will probably be worth it.

AI, Art, and Dictionaries

So, a philosopher from Harvard wrote an article about whether or not artificial intelligence is capable of producing art.

This left me with two major questions: First, how do we define artificial intelligence? Second, how do we define art? I believe the answer to the question hinges on these two things.

Strictly speaking, computers are capable of creating aesthetically pleasing pieces of media, and have been doing so for decades. Whether an audio visualization counts as art when it is the result of a computer following a strict set of programming instructions is the nature of the question – how few inputs does it take before the definition crosses over from ‘program’ to ‘AI’?

The term ‘AI’ seems to be a common enough buzzword, but I don’t think that Data or HAL 9000 were deemed AIs because they could tell bees from 3s with good accuracy (spare a thought for ‘Robot’ from Lost in Space, who never even got a name). The Google Duplex system is a bit closer, but even it is incredibly easy to trip up, even while staying on topic. Watson is good at Jeopardy!, but its success in its core purpose – cancer treatment – is a bit less rosy. I submit that the current generation of what is called ‘AI’ consists of many very good incremental improvements, and is to be lauded. However, I don’t think it is correct to assign the description of “artificial intelligence” to a computer that can win Jeopardy! but not understand the humor behind saying “let’s finish, chicks dig me”.

On the flip side, let’s discuss ‘art’. Though this video has its flaws (most notably comparing the best of the past with the worst of the present), the takeaway is that what does and doesn’t constitute ‘art’ is so subjective that even defining it is subjective. If I, as a DJ, play a good set for a live event, is it art? If I do the same thing and post the recording on Mixcloud, does it then become art? If I produce a song using the sounds and plugins of Ableton or FL Studio and nobody else hears it, is it art? Does it become art if I do this a dozen times and release an album? Is it more or less ‘art’ than Handel’s Messiah? Is beauty truly in the eye of the beholder, or is there really a need for some sort of governing body that defines what ‘art’ is, especially for exhibitions? If the latter, how do those people ultimately decide? As one example, to what extent does context play a role – does a piece of graffiti become art because it was painted on the Berlin Wall rather than in an abandoned subway tunnel or on a chalkboard frozen in time?

The fact that it is so difficult to define what ‘art’ really is makes the question of AI producing art fundamentally unsolvable. If art is defined by self-expression, then the definition of AI would need to include a ‘self’, and that AI would need to have something to express. If art can only come from emotion, then the entire wing dedicated to furniture in the Metropolitan Museum of Art is on shaky ground, since a nontrivial number of those pieces were simply ‘ornate contract work’ whose artistic merit is commonly tied to their owners or context. If art is defined solely as something aesthetically pleasing, then “$5 Million, 1 Terabyte” doesn’t fit that bill (unless the case counts as art), but assisted CGI does.

Once we can settle on how to consistently define ‘art’, then we can talk about whether AI can do it. If art can’t be defined, then the source and inspiration become irrelevant, ironically meaning that one can equally argue that AI is capable of creating art and that humans cannot.

Unreal Tournament’s End of Active Development Is A Symptom

So, the news broke today that the reboot of Unreal Tournament was no longer in active development. It’s not much of a surprise: not only has there not been an update to the title in nearly a year, there hasn’t been an update to their development blog in over a year, either.

Now, in addition to being a general fan of the title, I was a fan of the business model, too: the game was free, with no in-app purchases or lootboxes. A store where users could sell skins, mods, and character models was available, with Epic Games skimming off the top, and the Unreal Engine 4 powering it would be available for developers of other games to use, with royalties paid on the engine past a certain threshold.

However, Epic Games struck gold with Fortnite. If you haven’t at least heard of it by now, you probably haven’t spoken to an adolescent since the Obama administration. It’s so popular that Sony reversed its stance on cross-platform play for the first time ever in the PlayStation ecosystem. Epic released the Android app on its own website rather than in the Google Play store…and got 15 million downloads in three weeks; by contrast, I’m having a rough time coming up with another app outside the Play Store that has broken even its first million. It’s that big. The fact that Epic has been focusing on printing money with Fortnite rather than developing Unreal Tournament is not just common sense; it’s almost absurd to try to justify the inverse.

While the unbridled success of Fortnite is undoubtedly a major reason why UT development has stalled, I submit that it’s far from the only one. After all, Epic Games has been in the business since the 1990s. They are fully aware that empires come and empires go; Minecraft, Angry Birds, Halo, and Doom before them all testify to this fact. I think there’s a deeper reason.

Unreal Tournament hails from a completely different era in gaming. UT2004 shipped with a level editor and dedicated server software. For some, a part of the fun was making one’s own maps, character models, and even total conversion mods, frequently distributing them for others to enjoy. While quality levels varied significantly, communities formed around map and mod development. Even if you weren’t a developer, one of the major draws to the game was that downloadable content was free, and created by the players.

Fast forward to 2018, and that’s not at all how things work anymore. I can’t recall the last major game release that allowed players to self-host their servers or add their own created content, let alone shipped with the tools to do so. New maps and character models are almost exclusively paid add-ons now, and few players remember it any other way. Even those who made their own content for UT in its heyday are likely either employed in some form of design or development, or have moved on to other things.

Those who are still doing this sort of development have a plethora of options, from the open source Alien Arena and Freedoom to GoldenEye Source, to straight up developing their own indie games to release on Steam. With lots of options courting a dwindling number of skilled individuals, Epic’s bet on ‘bringing the band back together’ was always going to be an uphill battle. Moreover, the raw player numbers probably weren’t great either; Quake Champions, Toxikk, and other arena shooters are solid options for players who aren’t perfectly happy playing UT2004, a game whose mechanics and balance are so well done that its era-appropriate graphics can be readily overlooked.

I don’t think this is really the end of UT development, though. Like I said, empires come and empires go, and while it makes sense for Epic to cash in on Fortnite while it’s a household name, by 2021 (if that long) there will be another game to take the crown. While Fortnite will still probably be popular enough to handle the payroll, the focus will likely shift back to developing and licensing the Unreal Engine. With hundreds of games built on the engine, including heavy hitters like Mortal Kombat X, Spec Ops: The Line, Rocket League, Infinity Blade, the Batman: Arkham series, and of course the Mass Effect trilogy, licensing the engine is far and away Epic’s best source of steady income.

And when game developers are looking around for the engine on which their next title should be based, there is no better way for Epic to showcase the Unreal Engine than to have its namesake available for free.

Call of Duty Black Ops 4 – One More Thing With Which I’m Incompatible

So, I took a little time to try my hand at Call of Duty: Black Ops IIII. I am left to assume that it’s simply one of those things with which I have a fundamental incompatibility…either that, or Activision ultimately has no idea how to learn the lessons of the games that came before this one.

Now, I’m sure I’m not entirely qualified to speak on the game authoritatively; I own Modern Warfare and the original Black Ops, games whose single player campaigns I’ve started twice and never completed.

I knew going into it that the single player mode was essentially just a tutorial; there were no shortage of pieces written about the fact that the game had no real single player campaign at all. I was also well aware that the game had loot boxes and in-app purchases as integral components of its design.

Jim Sterling has made a number of videos on the topic of lootboxes and microtransactions which I generally agree with, so I won’t go into detail on that front. The bigger issue I have with the lack of a single player campaign is that adding one is trivial. The first Black Ops game had a story. It was a fairly outlandish one, but CoD has never owed its popularity to its storytelling. Not having a story-based single player campaign is regrettable, but Unreal Tournament 2004 solved that problem over a decade ago with a simple progression ladder, in which multiplayer matches against bots were won to advance to the next challenger, and so forth. Its use of the exact same maps and character models as the multiplayer game meant that development time was minimal, it gave players who wanted a single player experience a way to have one, and everyone had a way to get good enough to play multiplayer.

Now, Ben ‘Yahtzee’ Croshaw describes Destiny 2 as a game where the sum total of the objectives is “go to the place and shoot the lads”, with a paper thin story regarding *why* you’re going to the place and shooting the lads. Some readers might say, “but, don’t you like Unreal Tournament, where there’s not only a lack of reason for shooting the lads, but since the lads you’re shooting are in the same arena as you, you’re not even getting the satisfaction of going to the place to shoot them?” Well, yes…but I think there are a few reasons why I hold UT to a different standard than CoD.
First, UT doesn’t have the pretense of realism. For example, the earlier CoD titles that put the franchise on the map had their weapons closely modeled after real firearms, albeit not always military issue. Newer installments have moved away from that attention to detail, but it was a part of the early design. Early CoD games were set in actual historical theaters of war, the first two Modern Warfare installments take place in areas of conflict that are at least somewhat believable, and while Black Ops went for the ridiculous in the back half of the game, it at least began in a historical conflict where one really could see a black ops mission taking place. Part of the fun was that players could participate in historical events, and while for many it was likely just an excuse to go to the place and shoot lads in uniforms laden with swastikas, that grounding set the franchise apart from the literally hundreds of first person shooters released before Call of Duty, including iconic titles like Doom and Halo.
Unreal Tournament never did any of this, and was always completely fictitious and fantastical in every way. From its remote planets to its impossibly proportioned character models, its brighter colors, and its weapon loadout clearly focused on game mechanics, the title was always intended to be taken at face value. Asking why we’re capturing a flag in UT is like asking why we’re stacking boxes in Tetris or eating dots in Pac-Man.

One may well argue that CoD has been moving away from realism for some time, and that the lack of a single player campaign simply reflects that shift in focus, with reasoning anywhere from the pragmatic (“players were spending 99% of their time in multiplayer anyway”) to the cynical (“a single player campaign, even a simple progression ladder, would conflict with Activision’s primary objective: sell lootboxes/DLC maps/live services”). Moreover, there are probably some who would say that my relative inexperience with CoD is a part of the problem. That too is a distinct possibility; Raycevick, who has played them, discusses this in greater detail. However, I submit that if Black Ops IIII is the natural progression of the title, it starts looking more and more like an arena shooter. Making that transition would put it into a subgenre where the things that made CoD stand out in its earlier iterations start to become a liability…especially when this installment has a $60 sticker price, a price so high that I could not find an arena shooter selling for even half of it. I could, however, find several of them for free – from the open source OpenArena to Alien Swarm, GoldenEye Source, Quake Champions, Unreal Tournament, and the 800-pound gorilla: Fortnite.

Creating both an internal and a guest Wi-Fi network on a Sonicwall

I have a hate-hate relationship with Sonicwall. They’re annoying when they don’t work. I recently had to conjure up a procedure for configuring a new Wi-Fi enabled Sonicwall with two different Wi-Fi networks, one for internal use and the other isolated for guests. Here is that tutorial. It assumes an out-of-the-box Sonicwall config, starting with the initial setup wizard…


1. When going through the initial setup wizard, do NOT specify any Wireless settings.

2. For the internal wireless, use the Wi-Fi wizard. Set its IP Assignment to “Layer 2 Bridged Mode”; bridge to X0. Give it a useful SSID and be sure to use the WPA/WPA2 mode and give it a password. Do NOT create an additional virtual AP in this wizard.

3. Go to Zones, then Add a new zone. Set its security type to Wireless. Defaults are fine; if you’re being fancy, the Guest Services page allows for a captive portal to be set.

4. Go to Interfaces, then Add Interface, and choose Virtual Interface. Assign it to the Zone you just made, and give it a VLAN tag (10 is what I tend to use). Make its parent interface W0, and set its subnet mask to something bigger than a Class C (255.255.252.0 is what I tend to use). Click OK, and confirm the notice saying the Sonicwall can’t be configured from the VLAN.

5. Go to Network->DHCP Server. Click ‘Add Dynamic’. Check the ‘Interface Pre-Populate’ box, and choose the VLAN interface you just made. Go to the DNS tab and add some public DNS servers, especially if you’re on a network with a domain controller (guests shouldn’t be handed your internal DNS).

6. Go to Wireless, then Virtual Access Point. Click ‘Add’ under the Virtual Access Point section. Give it a name and an SSID, and set the VLAN ID to the one you made earlier. Under ‘Advanced’ settings, set the Authentication type to WPA2-PSK, the cipher type to AES, and the ‘Maximum Clients’ to 128. Add a passphrase, then click OK. You might also want to edit the original SSID to allow 128 wireless clients instead of the default 16.

7. Still in the Wireless->Virtual Access Point area, edit the “Internal AP Group” in the “Virtual Access Point Groups” section. Add the additional SSID you just created to the Internal AP Group. Click OK to exit.

8. Go to the Wireless->Settings area. On the drop-down labeled “Virtual Access Point Group” on the bottom, select the Internal AP Group option. Click Accept on the top.
(note: if you get an error saying “Status: Error: Too small 802.11 Beacon Interval for Virtual Access Point”, go to Wireless->Advanced, change the Beacon Interval to 500, and try this step again).

It will take about a minute for all SSIDs to become visible to devices…but at that point, everything is properly configured.
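For reference, here’s a rough sketch in Python (no Sonicwall API involved, just the standard ipaddress module) of the guest-side plan the steps above produce. The 192.168.8.0/22 network, VLAN tag, and DNS servers below are placeholders; substitute whatever you actually configured:

import ipaddress

# Hypothetical values standing in for steps 3 through 5 above.
guest_wifi = {
    "zone": "Guest-Wireless",                          # step 3: wireless-type zone
    "vlan_id": 10,                                     # step 4: VLAN tag on the virtual interface
    "parent_interface": "W0",                          # step 4: parent interface
    "subnet": ipaddress.ip_network("192.168.8.0/22"),  # step 4: 255.255.252.0, bigger than a Class C
    "dns": ["8.8.8.8", "1.1.1.1"],                     # step 5: public DNS, not the domain controller
}

# Subtract the network and broadcast addresses to get the usable pool.
usable = guest_wifi["subnet"].num_addresses - 2
print(f"Guest scope {guest_wifi['subnet']} leaves {usable} usable addresses")
# -> Guest scope 192.168.8.0/22 leaves 1022 usable addresses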

The Update Virus

We live in a technological world where ‘updates’ are basically an expectation of virtually anything connected to the internet, and I’m uncertain that it’s the best thing for a number of reasons.

I was discussing this with a friend of mine last week: previous generations of video games didn’t have an update mechanism. If the game had bugs on the day the cartridges were programmed or the CDs were pressed, players got the glitches, and the game’s reputation would reflect them. This gave developers (of games and other software alike) an incentive to do a good amount of quality assurance testing prior to release. The ship-then-patch model has removed much of that QA testing, letting the complaints and YouTube uploads of the first paying customers fill the gap. In other words, it has created an incentive for companies to use paying customers, rather than employees, as QA staff.

Mobile apps have gotten just as bad. With pervasive LTE, apps have less incentive to optimize their code, leading to situations like the Chipotle app requiring nearly 90MB of space. This could be done in maybe 10MB of code (arguably less), but instead there’s a whole lot of unoptimized code, which reduces available storage for users, increases the likelihood of bugs, and creates an interminable cycle of storage management for folks with 16GB of storage – a near-infinite amount just 20 years ago. Moreover, update logs used to describe new features, optimizations, and specific bug fixes. The majority of change logs have devolved into saying little more than “bug fixes and improvements”. Is the bug I’m experiencing one of the ones that got fixed? The fact that the Netflix app will no longer run on a rooted phone isn’t something that made it into a change log. Yet, with basically no information, many people allow desktop applications and mobile apps to update themselves with little accountability.

The fact that both Windows and many Linux distributions perform major updates at a semi-annual cadence is itself telling. The PC market has been fragmented for most of its existence. Even after it became the Windows/Mac battle (RIP Commodore, Amiga, and OS/2), there was a span when a person’s computer could be running Windows 98SE, 2000, NT, ME, or XP. Users could be on dial-up or broadband, a 233MHz Intel Pentium II (introduced in May 1997) or a 1.2GHz P4 (introduced in January 2002), 64MB of RAM or 320MB, with or without a hardware GPU, at resolutions anywhere from 640×480 to 1280×1024, all in a world before most things were just a website. Yet in that far more fragmented computing landscape, it was possible for software developers to exist and make money. There was little outcry from end users expecting “timely updates” from Dell or IBM or Microsoft; the updates that did come out were primarily bug fixes and security patches, and nobody expected software developers or hardware OEMs to list “timely updates” as a feature they were aiming to achieve.

So, why do I call it the ‘update virus’? Because the major OS vendors (Apple, Google, Microsoft) are all getting to the point where constant updates are an expectation, and they’re not just ‘security updates’ but ‘feature upgrades’. Many end users and fellow technicians I interact with have a condescending mindset towards those who choose otherwise. At first glance, I can’t blame people for being okay with new features for free, but my concern is how monolithic it all is. It is not possible to get only security updates on any of the three major platforms; UI changes are part and parcel. All three operating systems have gotten to the point where there is no “go away” button; Android’s OS release options are “update now” or “remind me in four hours”. Really? Six nags a day? No user choice in the matter? There was a massive outcry when Lollipop came out; its sweeping UI changes were difficult for many to deal with, and there were more than a few reports of measurably decreased battery life. I recently bought a tablet for my mother on which updates made it impossible to access the Play Store; my only option was to root the tablet so I could disable the update, because the stock firmware ran fine. Is this truly an improvement?

Now, most people will argue “but security! but malware!”, and to an extent, they are right. Expecting users to manually disable SMBv1 for the sake of stopping the infamous WannaCry ransomware from spreading is certainly the epitome of wishful thinking. By contrast, I recently had the laptop I use for controlling my lighting rig when I DJ fail a Windows Update immediately before an event, leaving it stuck in an unbootable state and impossible to use for its intended purpose. On what basis is that behavior not precisely the sort of thing typically associated with the very malware from which Windows Update purports to protect us?


Ultimately, I like “good updates”, whether they fix security holes or optimize a feature. I do not like “bad updates” – the ones that break an OS installation, install at the worst possible time, massively revamp the UI without a “classic mode”, or otherwise prevent my devices from performing their intended function. With no way to tell the good ones from the bad ahead of time, updating has gone from a best practice to a gamble.

And if I wanted to gamble, I’d go to a casino.

Review: TP-Link EAP Series Access Points

Long story short, I’ve wanted to upgrade the wireless connectivity in my apartment for some time. I’ve also been pretty impressed with TP-Link reinventing itself, from a bottom-of-the-barrel manufacturer typically grouped with Tenda and Trendnet to a genuinely solid mid-range competitor in the network/connected space, with good products at competitive prices. They’re one of the few companies where a packaging refresh seemed like more than just a graphic designer winning over the VP of Marketing, and instead reflected a shift in the products themselves.

I recently got addicted to buying the TP-Link LB100-series smart bulbs, primarily because they were the only ones on the market I could verify didn’t require the creation of an online account, and would function on a LAN even if I blocked the bulbs from getting to the internet using my firewall. Their managed switches have been solidly built and have a much better UI than in years past, and though it wasn’t the greatest performing router ever, the AC750 did a solid job in the versatility field, being either a router, an access point, a range extender or a bridge, depending on what was needed.

So when I saw they were making centrally managed access points at half the cost of even Ubiquiti, I needed to give them a try.

Two days in, and I’m 50/50 on them. Normally, I use either Ubiquiti or Ruckus access points. The latter is one of the industry standards, along with Aruba and Cisco, but at $400 for their lowest-end access points (plus licenses, support contracts, and controllers for even modestly-sized rollouts), it’s a bit of sticker shock if you’re not the sort of venue that houses a thousand people on a regular basis. In my experience, Ubiquiti offers 80% of the function for 30% of the price, but at higher densities the differences become more apparent. I was hoping that TP-Link, with a number of similar features listed on the box, would be an option for those who want those features at consumer prices, or as my friend Charisse puts it, “champagne taste on a beer budget”.

One thing I noticed immediately was that the TP-Link EAP225 offers both standalone and centralized management functions. While Ruckus supports this, Ubiquiti requires a controller of some kind to be present. TP-Link’s central management software takes a number of cues from Ubiquiti’s UI, which was helpful. Setting the SSID and password was trivial, and I was happy to see client isolation options and the ability to configure a separate network without VLANs; admittedly I could use the included DC power adapter instead of my unmanaged switch and configure VLANs in Tomato, but that would defeat the purpose of having a PoE switch.

What I don’t like about it, however, is something that is likely to be fixed in the coming months. The software is annoying. According to the software, my AP is “Provisioning”, as it has been for the past three hours, happily functioning as configured. The software doesn’t auto-download firmware; users need to download the files and specify their locations. Furthermore, I had to force-reset my AP after ten minutes as it didn’t come back the way it was supposed to, then re-adopt and re-provision it. Short of bricking the AP, this is basically the worst experience for a firmware update. Ubiquiti and Ruckus both handle these seamlessly, and can even do them on a schedule.

The reason I attempted the firmware update in the first place was that the AP wasn’t reporting which devices were connected, though it did show a count. To their credit, the firmware update did ultimately solve this. Also, everyone else’s “locate” control causes the status LED to rapidly flash, flash a different color, or similar, which is incredibly helpful when trying to label APs in the software. TP-Link’s just jumps to the ‘Map’ area, which is an utterly pointless function for an AP that hasn’t been placed on a map.

Finally, my apartment isn’t big enough to need two of these, so I have no idea how well the roaming and client distribution work…yet. To its credit, though, this AP vastly outperforms my old one, a Linksys WRT1200AC running in AP mode with DD-WRT. The Linksys was lucky to hit 8-9MB/s of real-world transfer speed at the other end of the apartment; I just did a transfer at 27MB/s from the same spot.

All in all, while I consider Ubiquiti to be very close to Ruckus in function at a sizeable discount, TP-Link is about 60% of Ubiquiti for 50% of the price. The good news is that most of their failings are software-based and easily rectified; they could bump themselves up to 90% of Ubiquiti as the Omada Controller software improves. I won’t be returning this AP anytime soon, but I won’t be recommending that clients eschew their Ubiquiti or Ruckus systems for them just yet, either.

My definitive guide to reclaiming Windows 10

So, some friends on Facebook were discussing the fact that Windows 10 updates are a problem. They take forever and happen at inopportune times; coming back from the major releases means custom file associations get reset and it’s entirely possible for programs to be uninstalled; and I’ve run into no shortage of instances where I’ve had to revert because an update left a computer unable to restart…not to mention that computers preparing for major updates run unbearably slowly as they download and stage things. Microsoft thinks this is a good idea. They are the only ones.

Let’s make a few things clear here: my steps to resolve these issues are my personal preferences. If you do this, you will prevent your system from getting updates. Software that “works on Windows 10” might assume you’re on the most recent release, rather than whatever release you were on when you did these things. Reversing it all is a pain, and still might not work – just assume it’s a one-way trip. No warranty is provided if you mess things up. With that being said, let’s begin…

  1. Update Windows as much as you can before starting; you probably don’t want to be running on two-year-old install media. This should also include the SMBv1 fix; no sense in keeping yourself open to WannaCry.
  2. Take care of the major stuff in one shot…
    • Download W10 Privacy: https://www.winprivacy.de/deutsch-start/download/. Extract it, and run it as an admin.
    • Download this file: W10-CustomConfig. Extract it somewhere, too.
    • Click Configuration, then Load. Use ‘Choose Path’ to navigate to where you extracted the ZIP file, and import the INI file.
    • Go through the different tabs and make sure there’s nothing you’d like to change. This is the config I use on my computer, but your needs may be different, so give it a quick once-over.
    • Click ‘Set Changed Settings’, then confirm it. It’ll take a few minutes to finish everything. Reboot when you’re done.
    • After rebooting, from a ‘run’ prompt or a command line (or the start menu search), type “services.msc”. When it loads, scroll down to “Windows Update”. Double click it, set it to ‘Disabled’ (if it isn’t already), then click ‘Stop’ (if the service is running).
  3. Completely napalm Windows Update…
    • Go to C:\Windows\System32 and scroll down to ‘wuauclt.exe’.
    • Right-click, then click ‘Properties’. Go to the ‘Security’ tab. Click ‘Advanced’.
    • On the top where it lists the owner as “TrustedInstaller”, click ‘Change’. Type your user account name, then click OK. Click OK again to close out the “Advanced” window, then click ‘Advanced’ again to re-open it with the ownership changes.
    • Click ‘Change Permissions’, approving the UAC prompt if needed.
    • Click ‘TrustedInstaller’, then ‘Edit’. Uncheck everything except ‘Read’ (Windows Defender will replace it if you delete it or deny it ‘Read’ permissions). Do the same for the “System”, “Users”, “ALL APPLICATION PACKAGES”, and “ALL RESTRICTED APPLICATION PACKAGES” accounts as well. For added paranoia, remove everyone except “System” and “TrustedInstaller” so that it can’t run in a user context. Click OK, then OK again to commit the changes.
  4. Tell Cortana where to shove it. A word of caution, though: if you use Outlook, you won’t be able to do search-as-you-type. You will also wait forever for file system searches, though if you’re not using Everything to do your file searches instantly instead of battling the green bar, you don’t know what you’re missing. Anyway, without further ado…
    • Go to C:\Windows\SystemApps. You’ll see a folder called Microsoft.Windows.Cortana_[something].
    • Go back up to the last step about applying read-only permissions to “System” and “TrustedInstaller”, then do those exact same steps on this folder.
  5. Start Menu Fix.
    • Install Classic Shell. With Windows Update neutered, MS won’t be messing with it, so the final release will be just fine and very reliable. If you’re skittish about that, the five bucks Stardock wants for Start10 is perfectly reasonable. If you want to use the stock Windows 10 start menu, you’re weird, but you won’t see random apps starting up. At the very least, install Classic Shell provisionally, as it gives ‘uninstall’ options for a number of Win10 apps that Windows won’t allow, leaving you with just the core.
  6. Anti-Telemetry.
    • Most of this was addressed with W10 Privacy, as it adds a whole lot of entries to your “hosts” file to minimize what gets phoned home. However, I strongly recommend using a third party antivirus, since Windows Defender will clear the hosts file entries when it runs scans. ESET NOD32 is my personal favorite, and it’s frequently on sale on Newegg.
    • Run these commands from an elevated command prompt:
      rem Remove the Connected User Experiences and Telemetry service
      sc delete DiagTrack
      rem Remove the WAP Push Message Routing service (also used for telemetry)
      sc delete dmwappushservice
      rem Blank out the telemetry log the AutoLogger has already collected
      echo "" > C:\ProgramData\Microsoft\Diagnosis\ETLLogs\AutoLogger\AutoLogger-Diagtrack-Listener.etl
      rem Set the telemetry policy to 0, the most restrictive level Windows offers
      reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection" /v AllowTelemetry /t REG_DWORD /d 0 /f
    • Copy/paste this list of host file entries into yours. For added paranoia, if your firewall runs Tomato, you can use the integrated Adblocker to utilize this hosts file at the router level, though obviously this only protects you while you’re connected to that router.
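If you’d rather script that last part than paste by hand, here’s a minimal sketch in Python, run from an elevated prompt, that appends the list to the hosts file without duplicating entries already present. The file name telemetry-hosts.txt is a placeholder for wherever you saved the list:

from pathlib import Path

HOSTS = Path(r"C:\Windows\System32\drivers\etc\hosts")
BLOCKLIST = Path("telemetry-hosts.txt")  # placeholder: your local copy of the list above

# Skip blank lines, comments, and anything already present in the hosts file.
existing = set(HOSTS.read_text().splitlines())
additions = [line for line in BLOCKLIST.read_text().splitlines()
             if line.strip() and not line.lstrip().startswith("#") and line not in existing]

if additions:
    with HOSTS.open("a") as hosts_file:
        hosts_file.write("\n# --- telemetry blocklist additions ---\n")
        hosts_file.write("\n".join(additions) + "\n")

print(f"Added {len(additions)} entries to {HOSTS}")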


Okay, those are the majors, and the required stuff for me when I first wipe and reload Windows 10 on my laptop. Understand, though, that I wipe and reload Windows on my laptop approximately once every 14 months, meaning I’m not woefully out of date.

Best of luck, everyone!