Unreal Tournament’s End of Active Development Is A Symptom

So, the news broke today that the reboot of Unreal Tournament was no longer in active development. It’s not much of a surprise: not only has there not been an update to the title in nearly a year, there hasn’t been an update to their development blog in over a year, either.

Now, in addition to being a fan of the title itself, I was a fan of the business model, too: the game was free with no in-app purchases or lootboxes. A store where users could sell skins, mods, and character models was available, with Epic Games skimming off the top, and the Unreal Engine 4 powering it would be available for developers of other games to use, with royalties paid on the engine after a certain revenue threshold.

However, Epic Games struck gold with Fortnite. If you haven’t at least heard of it by now, you probably haven’t spoken to an adolescent since the Obama administration. It’s so popular that Sony reversed its stance on cross-platform play for the first time ever in the PlayStation ecosystem. Epic released the Android app on its own website, rather than in the Google Play Store…and got 15 million downloads in three weeks; by contrast, I’m having a rough time trying to come up with another app not in the Play Store that has broken its first million. It’s that big. The fact that Epic has been focusing on printing money with Fortnite rather than developing Unreal Tournament is not just common sense; it’s almost absurd to try to justify the inverse.

While the unbridled success of Fortnite is undoubtedly a major reason why UT development has stalled, I submit that it’s far from the only reason. After all, Epic Games has been in the business since the 1990s. They are fully aware that empires come and empires go; Minecraft, Angry Birds, Halo, and Doom before them all testify to this fact. I think there’s a deeper reason why.

Unreal Tournament hails from a completely different era in gaming. UT2004 shipped with a level editor and dedicated server software. For some, a part of the fun was making one’s own maps, character models, and even total conversion mods, frequently distributing them for others to enjoy. While quality levels varied significantly, communities formed around map and mod development. Even if you weren’t a developer, one of the major draws to the game was that downloadable content was free, and created by the players.

Fast forward to 2018, and that’s not at all how things work anymore. I can’t recall the last major game release that allowed players to self-host servers or add their own created content, let alone shipped with the tools to do so. New maps and character models are almost exclusively paid add-ons now, and few players remember it any other way. Even those who made their own content for UT in its heyday are likely either employed in some form of design or development, or have moved on to other things.

Those who are still doing this sort of development have a plethora of options, from the open source Alien Arena and FreeDoom to GoldenEye: Source, to straight-up developing their own indie games to release on Steam. With lots of options courting a dwindling number of skilled individuals, Epic counting on ‘bringing the band back together’ was always going to be an uphill battle. Moreover, even the raw player numbers probably weren’t great; Quake Champions, Toxikk, and other arena shooters are great options for players who aren’t perfectly happy playing UT2004, a game whose mechanics and balance are so well done that its dated graphics can be readily overlooked.

I don’t think this is really the end of UT development, though. Like I said, empires come and empires go, and while it makes sense for Epic to cash in on Fortnite while it’s a household name, by 2021 (if that long), there will be another game to take the crown. While Fortnite will still probably be popular enough to handle the payroll, the focus will likely shift back to developing and licensing the Unreal Engine. With hundreds of games built on the engine, including heavy hitters like Mortal Kombat X, Spec Ops: The Line, Rocket League, Infinity Blade, the Batman: Arkham series, and of course the Mass Effect trilogy, licensing the engine is far and away the best source of steady income for Epic.

And when game developers are looking around for the engine upon which their next title should be based, there is no better way for Epic to showcase the Unreal Engine than to have its namesake available for free.

Call of Duty Black Ops 4 – One More Thing With Which I’m Incompatible

So, I took a little time to try my hand at Call of Duty: Black Ops IIII. And I am left to assume that it’s just one of those things with which I simply have a fundamental incompatibility…either that, or Activision ultimately never learned some of the lessons from the games that came before this one.

Now, I’m sure I’m not entirely qualified to speak on the game authoritatively; I own Modern Warfare and the original Black Ops, games whose single player campaigns I’ve started twice and never completed.

I knew going into it that the single player mode was essentially just a tutorial; there was no shortage of pieces written about the fact that the game had no real single player campaign at all. I was also well aware that the game had loot boxes and in-app purchases as integral components of its design.

Jim Sterling has made a number of videos on the topic of lootboxes and microtransactions which I generally agree with, so I won’t go into detail on that front. The bigger issue I have with the lack of a single player campaign is that adding one would have been trivial. The first Black Ops game had a story. It was a fairly outlandish one, but CoD has never owed its popularity to its storytelling. Not having a story-based single player campaign is regrettable, but Unreal Tournament 2004 solved that problem over a decade ago with a simple progression ladder: win multiplayer matches against bots to advance to the next challenger, and so forth. Its use of the exact same maps and character models as the multiplayer game meant that development time was minimal, it gave players who wanted a single player experience a way to have one, and everyone had a way to get good enough to play multiplayer.

Now, Ben ‘Yahtzee’ Croshaw describes Destiny 2 as a game where the sum total of the objectives is “go to the place and shoot the lads”, with a paper thin story regarding *why* you’re going to the place and shooting the lads. Some readers might say, “but, don’t you like Unreal Tournament, where there’s not only a lack of reason for shooting the lads, but since the lads you’re shooting are in the same arena as you, you’re not even getting the satisfaction of going to the place to shoot them?” Well, yes…but I think there are a few reasons why I hold UT to a different standard than CoD.
First, UT doesn’t have the pretense of realism. For example, the earlier CoD titles that put the franchise on the map had their weapons closely modeled after real firearms, albeit not always military issue. Newer installments have moved away from that attention to detail, but it was a part of the early design. Early CoD games were set in actual historical theaters of war, the first two Modern Warfare installments take place in areas of conflict that are at least somewhat believable, and while Black Ops went for the ridiculous in the back half of the game, it at least began in a historical conflict where one really could see a Black Ops mission taking place. Part of the fun was the fact that players could participate in historical events, and while for many it was likely an excuse to go to the place and shoot lads in uniforms laden with swastikas, there were literally hundreds of first person shooters released before Call of Duty, including iconic titles like Doom and Halo; the historical grounding was what set it apart.
Unreal Tournament never did any of this and was always completely fictitious and fantastical in every way. From its remote planets to its impossibly proportioned character models to its brighter colors to its weapon loadout clearly focused on game mechanics, the title was always intended to be taken at face value. Asking why we’re capturing a flag in UT is like asking why we’re stacking boxes in Tetris or eating dots in Pac-Man.

One may well argue that CoD has been moving away from realism for some time, and the lack of a single player campaign simply reflects that shift in focus, with reasoning anywhere from the pragmatic “players were spending 99% of their time in multiplayer anyway” to the cynical “a single player campaign, even a simple progression ladder, would conflict with Activision’s primary objective: sell lootboxes/DLC maps/live services”. Moreover, there are probably some who would say that my relative inexperience in playing CoD is a part of the problem. That too is a distinct possibility; Raycevick, who has played them, discusses this in greater detail. However, I submit that if Black Ops IIII is the natural progression of the title, it starts looking more and more like an arena shooter. Making this transition puts it into a subgenre where the things that made CoD stand out in its earlier iterations start to become a liability…especially when this installment has a $60 sticker price – a price so high, I could not find an arena shooter for even half of it. I could, however, find several of them for free – from the open source OpenArena to Alien Swarm, GoldenEye: Source, Quake Champions, Unreal Tournament, and the 800-pound gorilla: Fortnite.

My Contacts: An odd thing to get philosophical about

I got my first cell phone for Christmas in 2003. It was a Nokia 3585i, a relic of a bygone era for a number of reasons. At the same time, I am certain I used features of it that very few Simple Mobile customers ever touched. Despite being a prepaid phone, it supported the Nokia PC Suite. I purchased a DKU-5 cable off eBay and did things on my first cell phone that only became commonplace a decade later. I found packs of mobile Java applets around the internet and would upload them to see which ones worked. Few did, but I did manage to get a handful working. I had custom ringtones I made out of MIDI files I found around the internet, edited using the copy of Cakewalk Plasma I can’t part with. Of course, the cable allowed me to use the phone as a dial-up modem as well. Though I used the Nokia suite natively for my first phone or two, I ultimately ended up using its capabilities to sync my contacts with Outlook – and later still, with ActiveSync.

I am one of very, very few people who can say that I have never lost my contacts, I have never had to type a contact into my phone twice, and I have copied my contacts over to every phone I have ever owned – fifteen of them, if my memory is completely accurate.

Taking a step back, I am perplexed at my own behavior with respect to my contacts. My contacts list is closely guarded – I stopped syncing it with Facebook over five years ago, I have not synced it with Google (actively defending against that with XPrivacy) or iCloud, and only one or two mobile apps get access to it. My contacts are synced with an Exchange server and stored locally. I fiercely protect the contact information of my friends.

And yet, it is a set of data I refuse to maintain.

I have 309 contacts. I stopped counting the people from whom I am estranged, for one reason or another, after the 100th. I cannot identify or describe eight of them. Nearly two dozen are for women who have since married, yet I have not updated their maiden names. Four are dead. Another four might be. Nearly half likely contain outdated numbers that no longer belong to the person listed. On the flip side, I very rarely add numbers to my contacts anymore. I am far more apt to search my text messages or rely on remembering a few digits of the number, leaving my call log to fill in the blanks. Even my own mother’s phone number lives only in the call log.


And in writing this, it’s possible I have figured out why.


My contacts list is not terribly useful as a list of people I contact. It is a list of people who had a small part in my life – enough to commit their number to my phone, but not enough to remove the awkwardness of the conversation that would undoubtedly transpire if I called. Perhaps it would be a different person entirely, and I could simply write it off as a ‘wrong number’. Perhaps it would be the very person specified in the Contacts entry, and the conversation would be a minute long as I realize they probably do not miss me. Darrell and Gabby will never answer me back, and it’s been three years since I’ve had any contact with Zoe. Maybe it’s a feeling that removing them from my Contacts is like removing the last reminder of them. Maybe I don’t add contacts anymore because I have developed a severe issue with permanence. Even the people who have stuck around for a few years and have long since warranted being added as a contact are still difficult for me to add.

Then again, maybe it’s because the people that really stick around get their numbers recorded where they need to be. Acquaintances and clients get contacts. The people in my life I call so often I can recite their numbers off the top of my head? They don’t need one.

Caller ID: The Cultural Shift Nobody Realizes

I’ve written before about how disruptive technology doesn’t always make headlines. It was only twenty years ago that Caller ID was an add-on feature that Ma Bell charged a premium for and that required a separate box to utilize. There were even commercials for it. Early cell phones didn’t have it at all. Today, it’s almost like cell phones themselves – so ordinary and ubiquitous that trying to explain a world before Caller ID is like trying to describe the Yellow Pages or a card catalog.

But this isn’t about the technological advances of the CNAM protocol, or about taking another nostalgic journey.

Caller ID’s cultural shift was that it made avoidance possible. Prior to Caller ID, one always had to pick up the phone, because it could be someone important or desirable to talk to. It could well be a phone solicitor, but if one was arguing with another person, and that person called, there would be some sort of exchange. Even if it was an immediate hang-up, it was still a form of exchange. In a post-Caller ID world, calls could be ignored in the same way that written communication like texts, e-mails, and letters could. Without Caller ID, ghosting would not be possible.

On one hand, this is ultimately a matter of technological enforcement of intent. In some contexts, this was a positive measure – people being harassed had a means of mitigation, and phone solicitation became less profitable – and by extension, less common.

On the other hand, it allows people to avoid conflict resolution. Still angry at someone? Intentionally ignore their calls. Don’t want to deal with a bill you’re late on? Ignore the bill collector. Feel like a conversation might not go your way? Send it to voicemail until you think it will.

Like anything else, these things are not always bad. Sometimes, it really is better to have a cool-off period before talking to someone in order to have greater success in resolving a conflict. However, Caller ID gave us, as a society, a means of conflict avoidance rather than conflict resolution. We got used to those capabilities and took them with us to our cell phones and text messages and IM apps, making sure that “block user” was always a possibility in every new communication app we used.

I’ll reiterate that these measures are good things when being used to ensure safety and security. The cultural shift, however, isn’t about the use of Caller ID in matters of safety. The shift is in the ability to use them for reasons of comfort, convenience, and control. Deciding whether we have used these tools for good or for ill is an exercise I shall leave to my readers.

Creating both an internal and a guest Wi-Fi network on a SonicWall

I have a hate-hate relationship with SonicWall; they’re annoying when they don’t work. I recently had to conjure up a procedure for configuring a new Wi-Fi enabled SonicWall with two different Wi-Fi networks: one for internal use, and the other isolated for guests. Here is that tutorial. It assumes an out-of-the-box SonicWall config, starting with the initial setup wizard…


1. When going through the initial setup wizard, do NOT specify any Wireless settings.

2. For the internal wireless, use the Wi-Fi wizard. Set its IP Assignment to “Layer 2 Bridged Mode”; bridge to X0. Give it a useful SSID and be sure to use the WPA/WPA2 mode and give it a password. Do NOT create an additional virtual AP in this wizard.

3. Go to Zones, then Add a new zone. Set its security type to Wireless. Defaults are fine; if you’re being fancy, the Guest Services page allows for a captive portal to be set.

4. Go to Interfaces, then Add Interface, and choose Virtual Interface. Assign it to the Zone you just made, give it a VLAN tag (10 is what I tend to use), and make its parent interface W0. Give it an IP address on a new subnet reserved for guests, and set its subnet mask to something bigger than a Class C (255.255.252.0 is what I tend to use). Click OK, and confirm the notice saying the SonicWall can’t be configured from the VLAN.

5. Go to Network->DHCP Server. Click ‘Add Dynamic’. Check the ‘Interface Pre-Populate’ box and choose the VLAN interface you just made. Go to the DNS tab and add some public DNS servers, especially if you’re on a network with a domain controller.

6. Go to Wireless, then Virtual Access Point. Click ‘Add’ under the Virtual Access Point section. Give it a name and an SSID, and set the VLAN ID to the one you made earlier. Under ‘Advanced’ settings, set the Authentication type to WPA2-PSK, the cipher type to AES, and the ‘Maximum Clients’ to 128. Add a passphrase, then click OK. Also, you might want to edit the original SSID to allow 128 wireless clients as well, instead of the default 16.

7. Still in the Wireless->Virtual Access Point area, edit the “Internal AP Group” in the “Virtual Access Point Groups” section. Add the additional SSID you just created to the Internal AP Group. Click OK to exit.

8. Go to the Wireless->Settings area. On the drop-down labeled “Virtual Access Point Group” on the bottom, select the Internal AP Group option. Click Accept on the top.
(note: if you get an error saying “Status: Error: Too small 802.11 Beacon Interval for Virtual Access Point”, go to Wireless->Advanced, change the Beacon Interval to 500, and try this step again).

It will take about one minute for all SSIDs to be visible to devices…but you will have properly configured everything when you are done.
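Once everything is up, it’s worth sanity-checking the isolation from a laptop joined to the guest SSID. Below is a minimal Python sketch of that test; the addresses are assumptions (192.168.168.168 being a common SonicWall LAN default), so substitute whatever your X0 interface and guest subnet actually use.

```python
# Minimal sketch: run from a device on the *guest* SSID to confirm the guest VLAN
# reaches the internet but not the internal LAN. Addresses below are assumptions.
import socket

TESTS = [
    ("internal LAN gateway (should be unreachable)", "192.168.168.168", 443),
    ("public internet (should be reachable)", "1.1.1.1", 443),
]

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for label, host, port in TESTS:
    state = "reachable" if can_connect(host, port) else "unreachable"
    print(f"{label}: {host}:{port} -> {state}")
```

If the guest device can still reach the internal gateway, double-check that the virtual AP is actually tagged with the guest VLAN and that the VLAN interface is in the Wireless (guest) zone rather than LAN.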

The Update Virus

We live in a technological world where ‘updates’ are basically an expectation of virtually anything connected to the internet, and I’m not convinced that’s a good thing, for a number of reasons.

I was discussing this with a friend of mine last week. We were talking about how previous generations of video games didn’t have an update mechanism. If the game had bugs on the day the cartridges were programmed or the CDs were pressed, players got the glitches, and the game’s reputation would reflect them. This gave developers (of games and other software alike) an incentive to do a good amount of quality assurance testing prior to release. The ship-then-patch model has removed much of that QA testing, letting complaints and YouTube uploads from paying customers fill the gap. In other words, it has created an incentive for companies to use paying customers, rather than employees, as QA staff.

Mobile apps have gotten just as bad. With pervasive LTE, apps have less incentive to optimize their code, leading to situations like the Chipotle app requiring nearly 90MB of space. This could be done in maybe 10MB of code (arguably less), but instead there’s a whole lot of unoptimized code which reduces available storage for users, increases the likelihood of bugs, and creates an interminable cycle of storage management for folks with 16GB of storage – a near-infinite amount just 20 years ago. Moreover, update logs used to describe new features, optimizations, and specific bug fixes. The majority of change logs have devolved into saying little more than “bug fixes and improvements”. Is the bug I’m experiencing one of the ones that got fixed? The fact that the Netflix app will no longer run on a rooted phone isn’t something that made it into a change log. Yet, with basically no information, many people allow desktop applications and mobile apps to update themselves with little accountability.

The fact that both Windows and many Linux distributions now perform major updates at a semi-annual cadence is itself telling. The PC market has been fragmented for most of its existence. Even after it became the Windows/Mac battle (RIP Commodore, Amiga, and OS/2), there was a span when a person’s computer could be running Windows 98SE, 2000, NT, ME, or XP. This was a world prior to most things just being a website, where users could be on dial-up or broadband, running anything from a 233MHz Intel Pentium II (introduced in May 1997) to a 2GHz Pentium 4 (introduced in 2001), with 64MB or 320MB of RAM, with or without a hardware GPU, at resolutions anywhere from 640×480 to 1280×1024. Yet somehow, in that far more fragmented computing landscape, it was possible for software developers to exist and make money. There was little outcry from end users expecting “timely updates” from Dell or IBM or Microsoft, and there was no expectation that software developers or hardware OEMs would list “timely updates” as a feature. The updates that did come out were primarily bug fixes and security patches.

So, why do I call it the ‘update virus’? Because the major OS vendors (Apple, Google, Microsoft) are all getting to the point where constant updates are an expectation, and they’re not just ‘security updates’ but ‘feature upgrades’. Many end users and fellow technicians I interact with have a condescending mindset towards those who choose otherwise. At first glance, I can’t blame people for being okay with new features for free, but my concern is how monolithic it all is. It is not possible to get only security updates on any of the three major platforms; UI changes are part and parcel with them. All three operating systems have gotten to the point where there is no “go away” button; Android’s OS release options are “update now” or “remind me in four hours”. Really? Six nags a day? No user choice in the matter? There was a massive outcry when Lollipop came out; the sweeping UI changes were difficult for many to deal with, and there were more than a few reports of measurably decreased battery life. I recently bought a tablet for my mother where updates made it impossible to access the Play Store; my only option was to root the tablet so I could disable the update, because the stock firmware ran fine. Is this truly an improvement?

Now, most people will argue “but security! but malware!”, and to an extent, they are right. Expecting users to manually disable SMBv1 for the sake of stopping the infamous WannaCry ransomware from spreading is certainly the epitome of wishful thinking. On the other hand, I recently had the laptop I use for controlling my lighting rig when I DJ fail a Windows Update immediately before an event, leaving it stuck in an unbootable state and impossible to use for its intended purpose. On what basis is that behavior not precisely the sort of thing typically associated with the very malware from which Windows Update purports to protect us?


Ultimately, I like “good updates”. Whether they fix security holes or optimize a feature, I am very much in favor of good updates. I do not like “bad updates” – the ones that break an OS installation, or install at the worst possible time, or massively revamp the UI without a “classic mode”, or otherwise prevent my devices from performing their intended function. With no way to tell the good ones from the bad, updating has gone from a best practice to a gamble.

And if I wanted to gamble, I’d go to a casino.

10 lines to annoy annoying parents

So, I’m not a parent. I have no intrinsic desire to become a parent. I applaud those who become parents, and my heart goes out to those who wish to become parents but are unable to do so.

That being said, a whole lot of people don’t seem to understand why I feel the way I do. I have thus compiled this list of things to say, which generally helps even the score.

“I have only seen Frozen twice. I have only seen Moana once. Also, I do not know the theme song to Doc McStuffins, Paw Patrol, Sofia the First, or My Little Pony. Did I miss the show your child is into? Sorry, it’s been a few months since I’ve watched Nick Jr.”

“I had seven uninterrupted hours of sleep last night.”

“Know who was responsible for the last five spills in my house? Me. Also, three of them happened during the Obama administration.”

“Sure, I can cover lunch. I’ll take it out of my nonexistent diaper budget.”

“If I’m amidst an argument that defies any logic, it’s probably with a client…who pays me hundreds of dollars an hour to tolerate their drama.”

“Sure, it’s possible I’ll change my mind. It doesn’t happen multiple times daily, though.”

“I haven’t done laundry yet this week.”

“My carpet is stain-free.”

“I had exactly what I wanted for breakfast, after waking up when I wanted.”

“I woke up to a bunch of noise this morning as well. The construction crew across the street will be done in about a month, though.”

Mass Effect Andromeda: One(ish) Year Later

YouTube content creator Raycevick has made a number of videos discussing video games and how they have aged. Unsurprisingly, I’ve watched his entries on Mass Effect and its sequels several times. He hasn’t done one on Andromeda yet, which makes sense given how little time has passed relative to the titles he has already reviewed.

What prompted me to write one myself, however, is the fact that I finished a second playthrough a few months ago. To my recollection, it’s the first time I’ve played through a Mass Effect game twice in the same year. In retrospect, I found that strange for a game I felt so mixed about at the beginning. I also considered the possibility that my original opinion had been swayed by my familiarity with the original trilogy, which made it seem worth giving the game a fresh review. Sorry not sorry, some spoilers are present.


I think it was the combat that drew me back for a second round. Really, it had to have been. Having gotten used to the “profile” system and the research and development mechanics, I was able to utilize them more effectively this time. My new mouse didn’t hurt, either. Combat is so different from the original trilogy that it took quite a bit of time to acclimate to the differences. The lack of a pausing mechanic, the ability to mix tech and biotic powers, and the jump jets allowing for dodging and hovering, in conjunction with true freedom of weapon choice, culminate in a far more compelling form of combat than in earlier entries. The XP spending has been polished to perfection, there are plenty of sequences that bring tension with varied enemies requiring a variety of tactics, and the ability to carry both ammo-based and energy-based weapons gives lots of player agency. It’s unsurprising that Frostbite, the game engine originally built for the Battlefield and Medal of Honor franchises, provided the developers with the tools to get the combat aspect of the game right.

Raycevick describes Mass Effect as “a trinity of combat, conversation, and exploration”. Though I’ve lauded the combat component, it’s amazing how much the exploration side falls flat. In my earlier review, I criticized the volume of fetch quests and how much of a time suck they were. It didn’t help that the exploration felt like something performed by the player rather than the character. In virtually every situation, we land where there are already people – not just the baddies, but colonists. How is my avatar boldly going where plenty of people have been before? Things are being revealed to me as a player, but my character isn’t truly blazing a trail. Now admittedly, I’m having trouble coming up with a better way of making “exploration” feel like it’s happening to my character as much as to me, but I’m hard pressed to come up with a worse way to do it than to make me feel like I’m the cluster’s UPS service. It annoyed me even further that I can send strike teams to complete “Apex Missions”…but I can’t delegate side quests to them?

The conversation side is lackluster as well. I’ll emphatically reaffirm that the in-vehicle banter between characters is the highlight of the dialogue. In fact, I was thrilled to find a mod that disabled the AI voiceovers, which had a maddening habit of cutting off banter I wanted to hear in order to inform me of something I already knew. In-person dialogue, however, is rife with shortcomings. The oft-maligned facial animations never bothered me; it’s the nature of the dialogue itself. A handful of characters are voiced by talented individuals who give life and expression to those characters, but most sound like they’re not invested in what they’re actually saying. The different responses tend to be almost pointless to pick. Most players would likely admit that in the original trilogy, since every major decision had some sort of contingency plan in place (e.g. Wreav if the player kills Wrex, a new council [same as the old council] if you don’t save them, etc.), it was just the illusion of choice if you really got down to it. Even so, dialogue options still had impacts. In Andromeda, I literally can’t recall a single discussion I had with anyone that made a difference about anything other than the NPC’s responses within the conversation. In its defense, the final push had a nice ensemble of radio chatter reflecting the assistance of allies met along the way; it was indeed a highlight of the game to have a “come together” moment with far more cohesion than ME3 provided. That one event, however, does not forgive a game’s worth of ignoring the decisions I spent the game making, when one of the series’ highlights was how deeply decisions impacted gameplay.

As far as characters go, Ryder never seems to come into her own, and spends the game being the lowest common denominator. She never seems to truly settle into either being ruthless or compassionate, rank-respecting or insubordinate, diplomatic or a fan of “bigger gun diplomacy”, mission oriented or “no squadmate left behind”. She has assorted moments of trying to get something to shine through, but it’s never coherent enough to truly define the character in the same way Shepard did. I can understand wanting to avoid turning Ryder into “just another Shepard”, but going to the other extreme of having virtually no charisma at all is an uninspiring solution.

It’s entirely possible that I’m biased as a result of recently watching YouTuber Ruskie’s video on the game, but I don’t think he’s wrong in his story assessment, either. The Archon is a one-note, imposing but uninteresting villain. There is little motivation to fight the Kett or the Remnant other than “because they’re evil”. We can’t attempt diplomacy with them, and they don’t have motives besides “exalt everyone”. They’re just cannon fodder with a lust for power. This was forgivable in Doom or GoldenEye 007, but for a series whose three prior installments did a pretty good job giving motivations to the antagonists (the Reapers admittedly being a major exception), the Archon falls flat as a villain.

On the flip side, the Angara are a solid choice for a species with whom to ally, but they (and the Kett) are the only new races met in the game. By contrast, I met people from half a dozen races in my first lap around the Citadel. There had to be more than two races in the cluster, but even if we want to argue that coming up with another race would have been prohibitive, at least give us more distinct Angara cultures. They’ve been fighting the Kett for centuries and they inhabit different planets, yet they all have the same language and culture and religion? Exploration could have been handled this way: meeting several factions (including those at war with each other) and putting Ryder in the uncomfortable position of trying to stay in everyone’s good graces while preventing the Milky Way refugees from becoming the common enemy. It’s not just the planets that could be explored, but really, the people.

The family dynamics fail to have any weight. You have a twin, but they are woefully underutilized. It would have been far more interesting for them to have been a squadmate, with the fact that you were chosen to be Pathfinder a point of contention. The crew could occasionally express concern about whether blood runs thicker than water, with the player having to choose between family and crew. It could have been one of the “big choices” to have to kick your twin off the ship, or something similar; Mass Effect has never explored family in this way before, and it could have been great. Instead, it’s a wasted opportunity: the few times family is involved, it feels more like a forced mechanic than something the player can truly explore.

Other things add to its shortcomings. The variety of planets is not as great as it could be, with three of the six being basically deserts. Even if the desert motif was retained, gravity or sunlight could have been the particular management challenge of a given planet. Wildlife is invariably hostile; there’s no equivalent of the pyjak or the space cow that I can recall seeing. The Scourge is purely an annoyance that adds literally nothing to the story or gameplay. I romanced Suvi the second time around, and I found it to be a waste – there were maybe four conversations involved, no real bonding or sense that she and Ryder were falling for each other, and I found it a strange double standard that while Suvi’s climactic romance scene was a kiss and a dip-to-black, Peebee’s…wasn’t. I can appreciate the planets having descriptions, but it doesn’t quite make sense for those descriptions to already exist if I’m supposedly discovering the planets for the first time. These are just a few of the minor annoyances which collectively become not-so-minor.

In conclusion, I think I stand by my original musings – Andromeda is a pretty good game. It is not, however, a good Mass Effect game. The standard is simply higher for a game whose title is Mass Effect, and it was a standard which Andromeda failed to meet on a number of fronts. Andromeda‘s combat may well have been an improvement over the original trilogy to the point where I played Andromeda twice in one year, but it’s the story that keeps me coming back to the first Mass Effect game ten years after I played it the first time. Do I think I’ll have played Andromeda six times by 2027? Probably not.

Infinity War vs. Justice League, and DC’s Box Office Handicap

This past week, I saw Avengers: Infinity War. I was also home sick and decided to re-watch Justice League. Minor spoilers of these and other Marvel and DC movies are sprinkled throughout, but nothing major.

It was Justice League that inspired this blog post. A generally held belief is that people are still waiting for the disappointing Marvel movie, while people are still waiting for the DC film that doesn’t disappoint. I think I’ve figured out a few reasons for the disparity. As a quick sidebar, I’ll admit my bias toward being a bit more of a DC fan. Sure, ‘feature film’ isn’t their strongest medium, but virtually every TV series they’ve done in the past three decades, animated and live-action, has been pretty solid. Also, while Batman: Arkham Asylum and its sequels are excellent games, the only Marvel games I can think of that were any good were their Street Fighter clones – nothing that had its own story to tell.

I am certain that the biggest challenge faced by both The Avengers and The Justice League is the same: they need a villain powerful enough to require more than one superhero to resolve the conflict, and a circumstance dire enough to warrant it. A villain that powerful, however, is nearly impossible to create without devolving into a one-note, insatiably power hungry, god-like being threatening the entire planet. There is basically no other story line which supports the heroes working together in such a manner.

A multi-superhero movie wouldn’t work with a villain like Khan. His appeal is the fact that he made the audience second-guess whether there was some merit to his cause. Interesting as Khan was (and still is), Iron Man would make quick work of him. It’s similarly impossible to go for the unsettling nuance of a villain like Norman Bates. What made Bates such a memorable antagonist wasn’t that he was some larger-than-life monster; it was precisely that he was so ordinary. That sort of nuance is unsettling when he is the antagonist of an equally ordinary person, but Norman Bates would not have enough time to creep everybody out before Wonder Woman took him out. Even a villain like Darth Vader, the textbook definition of “ominous” or “imposing”, would be a tough sell as an antagonist for The Hulk or J’onn J’onzz; there’s no possibility of a showdown going his way. Thus, we are left with Thanos, or Steppenwolf, or some other villain to whom it is impossible to assign any personality quality other than “more powerful than any two superheroes fighting them”. Any such quality would be subsumed by that power – a power that inherently isn’t human or held by a being that can be any real sort of reflection on the human condition.

As another quick aside, that last point, I believe, is among the reasons why Batman villains are as good as they are. Scarecrow’s power is to use people’s deepest fears as a weapon. Two-Face embodies internal conflict. Catwoman’s motivations are primarily self-serving, but she’s helped Batman in isolated instances. The Joker is, essentially, Batman’s antithesis and turns Batman’s own moral code against him. An ensemble of enemies who are themselves relatable, in conjunction with a flawed protagonist, makes an excellent basis for a story, and the lack of one is what makes a superhero movie devolve into “two dudes punch each other until the movie decides one of them actually hurts the other”.

So, why does DC have this problem more so than Marvel? I think there are a few reasons. First and foremost, I think DC’s biggest challenge is Superman. His laundry list of powers makes him a team on his own, leaving no room for internal struggle or conflict – or, conversely, any need for teamwork. A fight between anybody and Superman has no stakes, because his only weakness is a hard-to-find substance only billionaires seem to possess. It would have been particularly interesting if Superman’s weakness were something more common, like aluminum – easy enough for him to avoid, but suddenly evening the score as “the thing that can kill Superman costs $20 at Target”. Since it’s not, DC’s first hurdle is far higher – a being Superman can’t beat by himself. This multiplies the motivation and personality problems, because “more power for no reason” or “he’s just evil, okay?” is about the only way to justify an attack requiring more than Superman to resolve.

Second, one of the major issues with superhero groups is the classic question of “who watches the watchmen?”. Marvel handled this with Captain America: Civil War. That movie’s pitfall was that the heroes seemed to split down the middle for the sake of keeping things evenly matched during the fight scene. It would have been more interesting to spend more time exploring and explaining the motives of each individual, but at some level I’ll concede that Marvel’s 12 Angry Men would have had a far more limited audience. DC tried to tackle the same theme with Batman v. Superman, and it was not nearly as well received. A major part of that was the almost nonsensical setup to the fight, along with the fact that the fight could have easily been avoided by a thirty-second conversation in which Superman just explained what was going on. However, I submit that even with that situation handled differently, the story still wouldn’t have held up. Batman isn’t well known for his unwavering accountability to Commissioner Gordon, so for him to be the one having an issue with Superman’s lack of oversight is hypocritical and nonsensical.

Finally, I think there are the “less tangible” factors. A few bullet-time shots can add some artistic flair, but over half of the slow-motion shots in Justice League were pointless. I think director Zack Snyder uses hard lighting to excess as well. Heavy contrast in lighting can illustrate a darker mood, but shooting two-thirds of the movie that way is enough overkill to leave viewers with a sense of despair that Marvel’s brighter colors help avoid. While Marvel did the single-superhero stories well, DC only seemed to have solid success with Wonder Woman; only she and Superman had standalone movies prior to Justice League. In practice, this meant that The Avengers could spend more time on its own story, while the first half of Justice League was the summarized origin stories of The Flash, Aquaman, and Cyborg (and dealing with the, ehm, “Superman Situation”).

I’ll close with this – DC truly shined in their 2000s animated series Justice League Unlimited. They did this by making it primarily a compendium of smaller stories. In most cases, one or two superheroes would address a particular foe or circumstance, rather than the Justice League battling in concert every time. These stories were excellent in their depth and complexity precisely because they avoided the world almost ending on a weekly basis. There were so many great episodes and scenes that I feel the series itself is a great counterargument to the Marvel movies – it was done in such a way that it’s near impossible to retell its stories in a movie format.

Review: TP-Link EAP Series Access Points

Long story short, I’ve wanted to upgrade the wireless connectivity in my apartment for some time. I’ve also been pretty impressed with TP-Link reinventing itself, going from a bottom-of-the-barrel manufacturer typically grouped with Tenda and Trendnet to creating genuinely solid products at competitive prices and becoming a real mid-range competitor in the network/connected space. They’re one of the few companies where a packaging refresh seemed like more than just a graphic designer winning over the VP of Marketing, and instead reflected a shift in the products themselves.

I recently got addicted to buying the TP-Link LB100-series smart bulbs, primarily because they were the only ones on the market I could verify didn’t require the creation of an online account, and would function on a LAN even if I blocked the bulbs from getting to the internet using my firewall. Their managed switches have been solidly built and have a much better UI than in years past, and though it wasn’t the greatest performing router ever, the AC750 did a solid job in the versatility field, being either a router, an access point, a range extender or a bridge, depending on what was needed.
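For what it’s worth, the “works on the LAN with the internet blocked” behavior is easy to demonstrate: the LB100-series speaks the community-reverse-engineered Kasa protocol – JSON over TCP port 9999, obfuscated with an XOR autokey starting at 171. The sketch below is an assumption-laden illustration rather than anything official; the bulb’s IP address is made up, and in practice you’d probably just use a community library such as pyHS100. But it shows a bulb answering a status query with zero cloud involvement.

```python
# Rough sketch (not official TP-Link tooling): query an LB100-series bulb entirely
# over the LAN using the community-documented Kasa protocol on TCP port 9999.
# The bulb IP is an assumption -- substitute whatever address your router handed the bulb.
import json
import socket
import struct

BULB_IP = "192.168.1.50"  # assumed LAN address of the bulb

def obfuscate(plaintext: str) -> bytes:
    """XOR 'autokey' scrambling used by Kasa devices; the starting key is 171."""
    key, out = 171, bytearray()
    for byte in plaintext.encode():
        key ^= byte          # each ciphertext byte becomes the next key
        out.append(key)
    return bytes(out)

def deobfuscate(ciphertext: bytes) -> str:
    key, out = 171, bytearray()
    for byte in ciphertext:
        out.append(key ^ byte)
        key = byte
    return out.decode()

def query(ip: str, command: dict) -> dict:
    """Send a JSON command over TCP (4-byte big-endian length prefix) and return the reply."""
    payload = obfuscate(json.dumps(command))
    with socket.create_connection((ip, 9999), timeout=5) as sock:
        sock.sendall(struct.pack(">I", len(payload)) + payload)
        (length,) = struct.unpack(">I", sock.recv(4))
        data = b""
        while len(data) < length:
            data += sock.recv(length - len(data))
    return json.loads(deobfuscate(data))

# Ask the bulb for its system info -- no cloud account, no internet access required.
print(query(BULB_IP, {"system": {"get_sysinfo": {}}}))
```

That local-only control path is exactly why the bulbs keep working with the firewall rule in place; nothing in the loop ever has to leave the LAN.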

So when I saw they were making centrally managed access points at half the cost of even Ubiquiti, I needed to give them a try.

Two days in, and I’m 50/50 on them. Normally, I utilize either Ubiquiti or Ruckus access points. The latter is one of the industry standards along with Aruba and Cisco, but at $400 for their lowest-end access points (and requiring licenses, support contracts, and controllers for even modestly-sized rollouts), it’s a bit of a sticker shock if you’re not the sort of venue that houses a thousand people on a regular basis. In my experience, Ubiquiti offers 80% of the function for 30% of the price, but at higher densities the differences become more apparent. I was hoping that TP-Link listing a number of similar features on the box would make them an option for those who want those features at consumer prices – or, as my friend Charisse puts it, “champagne taste on a beer budget”.

One thing I noticed immediately was that the TP-Link EAP225 offers both standalone and centrally managed operation. While Ruckus supports this, Ubiquiti requires a controller of some kind to be present. TP-Link’s central management software takes a number of cues from Ubiquiti’s UI, which was helpful. Setting the SSID and password was trivial, and I was happy to see client isolation options and the ability to configure a separate network without VLANs; admittedly, I could use the included DC power adapter instead of my unmanaged switch and configure VLANs in Tomato, but that would defeat the purpose of having a PoE switch.

What I don’t like about it, however, is something that is likely to be fixed in the coming months: the software is annoying. According to the software, my AP is “Provisioning”, as it has been for the past three hours while happily functioning as configured. The software doesn’t auto-download firmware; users need to download the files and specify their locations. Furthermore, I had to force-reset my AP after ten minutes because it didn’t come back up the way it was supposed to, then re-adopt and re-provision it. Short of bricking the AP, this is basically the worst experience a firmware update can be. Ubiquiti and Ruckus both handle these seamlessly, and can even do them on a schedule.

The reason I attempted the firmware update in the first place was that the software wasn’t reporting which devices were connected, though it did show a client count. Now, to their credit, the firmware update did ultimately solve this. Also, everyone else’s “locate” control causes the status LED to rapidly flash, flash a different color, or similar, which is incredibly helpful when trying to label APs in the software. TP-Link’s just jumps to the ‘Map’ area, which is an utterly pointless function for an AP that hasn’t been placed on a map.

Finally, my apartment isn’t big enough to need two of these, so I have no idea how well the roaming and client distribution work…yet. Also, to their credit, this AP vastly outperforms my old one, a Linksys WRT1200AC running in AP mode with DD-WRT. The Linksys was lucky to get 8 or 9 MB/s real-world transfer speeds on the other end of the apartment; I just did a transfer at 27 MB/s from the same spot.

All in all, while I consider Ubiquiti to be very close in function to Ruckus at a sizeable discount, TP-Link is about 60% of Ubiquiti for 50% of the price. The good news is that most of their failings are software-based and easily rectified; they could bump themselves up to 90% of Ubiquiti as the Omada Controller software improves. I won’t be returning this AP anytime soon, but I won’t be recommending that clients eschew their Ubiquiti or Ruckus systems for them just yet, either.
