Why Kinect Kicks It Up A Notch
Kinect: To some, the motion controller is a beautiful way to bring the family together, or for a gamer to get some exercise. Others will throw a fit at the peripheral’s very mention, screaming, “Not buying this now” across their favorite forums just because of an optional feature or two. I speak, of course, of the core gamer. Having had a controller in our hands since day one, some of us seem to be having trouble letting go of it, or even letting it work in conjunction with Kinect.
Being a core gamer myself, I find this attitude entirely foolish, like refusing to eat at a diner because they offer you the choice of ham or bacon with your breakfast instead of just handing you bacon. I love my controller, don’t get me wrong, but Kinect adds a new level of depth and immersion, especially in our core games.
Picture this for me, if you will: You’re playing your favorite fantasy game and you’re about to take on an enemy. Stealthily, you creep up through the shadows and take a shot with your bow. The shot grazes his shoulder, and he turns quickly to engage you, broadsword in hand! Your heart starts to pound as he closes in, and you know the only way to survive is to use one of your lightning spells…
So you pause the game and start to dig through the wealth of cluttered menus, searching through thick and thin for a single ability. The immersion is then ruined and your heart starts to calm as you realize: I’m no longer playing a role, I’m simply playing a video game. Using Kinect, mainly its microphones, can help keep the player in his role and truly bring him into the game’s world. Two perfect examples of this are Mass Effect 3 and the recently released Elder Scrolls V: Skyrim Kinect patch.
A problem I’ve always personally had with the Mass Effect series is combat, namely switching weapons and using skills. Since I play a biotic, I constantly have to pause to open the power wheel and select powers to unleash on my foes, which again breaks immersion. I’m no longer Commander Shepard, but the man holding the controller. Enter Mass Effect 3 and its Kinect usage. Simply say the ability you want aloud and your character will use that power, limiting how often you need to pause and keeping the action constant. Since a biotic (or any ability-based character, for that matter) was limited to only three shortcut powers before Kinect, this makes skills you’d otherwise seldom use far more accessible. You can also do this for your squad-mates’ powers and tactics, such as asking them to move up or take cover.
That’s not all it can help you with, though, as just about anything in the game can be interacted with via Kinect. Sure, most of these interactions are pointless (opening doors or picking up items, for example) since it’s simpler and faster to just press a button, but there is one that truly takes the cake: conversations. While talking with a character, you can continue the chat by speaking one of the available responses aloud. This sucks the player into his role like no other game could, as he can now literally joke with his favorite buddy, yell at that jerk in the mess hall, and whisper sweet nothings in his love interest’s ear, all because of Kinect.
I can hear some of the core gamers now: “But pausing in combat helps us kill things easier! It gives us time to think, duh!” While this is true, is that really what an RPG is about? Yeah, it can make things easier or give the few extra seconds needed to plan, but what of the role you play? While some would disagree with me, I would rather think on my feet like the in-game character than exploit an external pause button; it makes for more believable characters and engaging gameplay. After all, Shepard didn’t kick Saren’s ass because he could pause, did he?
Of course, the peripheral has other uses, like making Skyrim a lot more accessible. My first problem with Skyrim is similar to my problem with Mass Effect: pausing during combat. Playing a mage, I constantly have to switch out spells in a fight, whether to exploit a weakness or to heal. Sure, there’s already a very handy favorites menu that gives you shortcut access to anything you deem important; but, yet again, you’re required to pause the game, which feels limiting once you’ve tried Kinect. With the device you can slap a vocal shortcut on things like axes, lightning spells, wards, lights, and even dual-wielded items. Simply say “Equip *insert item here*” and your character will brandish whatever tool you’re looking for.
But the feature that the majority of Skyrim players will be most interested in is Dragon Shouts. With Kinect you can use your Shouts in one of two ways: say the name of the Shout out loud in English to use the highest version you’ve unlocked, or hold down RB and speak the specific Words of Power in the Dragon Tongue. Yes, you read that correctly: you get to speak freaking dragon! Now you can shout “fus ro dah” at your TV and something will actually happen. How cool is that? Damned cool, if you ask me!
Of course Kinect has other practical uses in Skyrim. Players can sort inventory items by weight or value, find quests and locations on the map quickly, automatically loot containers by weight-to-value ratio, and quick save or load on command. The most helpful of these, however, is follower commands. Before Kinect, players needed to talk to a follower about a task, point and click the place, person, or object they wanted them to interact with, and then wait for them to carry it out. Now you simply tell your follower what you want done, and he’ll do it with considerably less hassle.
The features present in both games help to place the gamer into the shoes of the characters created, giving a tremendous sense of investment. You are Commander Shepard; you live as the Dragonborn. This level of immersion can’t be found anywhere else, nor would it be possible without Kinect, a device we seem to be taking for granted.
Not everything is sunshine and dragon killing, though; just as we have two shining examples of how Kinect can augment a game, we have a title that shows exactly what not to do. I speak, of course, of Halo: Combat Evolved Anniversary. On paper, Halo promises some fun shenanigans: throwing grenades, reloading, and scanning objects with your voice. In practice, however, it simply falls flat. The lag between a spoken command and the actual action is a hindrance, and often quite counter-productive…
Say you’re playing: you have a group of Grunts in front of you, and, thinking the best course of action is to frag them all at once, you say “Grenade!” while your crosshairs are on them… and nothing happens immediately. Now under Needler fire, you duck into cover, only for that grenade to finally be thrown, bounce off a wall, and land at your feet. Reloading by voice command takes just as long, making it easier and considerably more effective to stick with the buttons.
Scanning, another use for Kinect in Halo: CEA, is less of a hassle but more of a novelty. Players will come across items or characters that glow orange; these can be scanned by saying, “Analyze.” The newly acquired info is then added to the Library, where you can read up on the item or character and rotate its 3D model using motion controls. While this is a neat little feature, it simply feels tacked on, as though slapping Kinect on the title was meant to drive sales up, adding to the quantity of features rather than the quality of the gameplay. In this case, less would have been more.
Sure, it’s kind of a scary thought when you think about it: any developer can now slap the little purple banner on their game, with less-than-satisfactory features embedded into the title. This is nothing new, however; people have been making crappy games off popular names (any movie-based game or recent Call of Duty title comes to mind) in the hope of making a quick buck.
The point I’m trying to make is that when a golden product comes along, Kinect can make the title so much more engaging and playable; it’s the difference between simply playing as a character and being that character. With all the time and effort that can be sunk into games now, especially RPGs like Skyrim, wouldn’t you rather get the most in-depth experience possible? I know I would!
I’ve mentioned a lot of controller-plus-microphone action, but what of controller-plus-motion? Well, to be honest, there isn’t any real collaboration between those inputs… yet. Set to release later this year is Steel Battalion: Heavy Armor, the first true combination of motion and handheld controls. For those unfamiliar with the series, it is a mech-combat game where players take control of massive walking tanks (not dissimilar to the Armored Core series).
So, how will it work? Basic controls like movement, aiming, and firing your weapons will be handled by the controller, while Kinect will handle gestures made with your upper body, such as pulling down the scope, starting the engines, and hauling your teammates back into the tank when they flee in terror (then decking them one good one to snap them out of it!). How well this is implemented is up to the developer; it will either turn out brilliantly, a sure sign that Kinect has a place in core gaming, or fail on all counts, possibly scaring other devs away from the motion controller. We’ll have to wait with bated breath to see.
But, if it turns out well, what does this mean for the future? More games of this kind, for one, implementing both control styles. More importantly, let’s think a little farther ahead, shall we? With the three big console manufacturers starting to take the next generation of consoles seriously, what if Microsoft keeps pushing Kinect forward?
Imagine the possibilities, like individual finger detection or eye tracking! Say you’re playing a ninja game (in a universe like Naruto or Ninja Gaiden) where magical jutsu or ninpo abilities are present. Instead of watching your character perform elaborate hand motions on screen after you press a string of button commands, you could actually learn and execute them yourself. Or perhaps this could be used in educational apps to teach users things like sign language, which could then be worked back into games. Say you’re playing a detective game, like L.A. Noire, and your witness is deaf. Players could use their new-found knowledge of sign language to communicate with the witness and break the case.
Taking it further would be eye tracking, which could be used in shooters such as the Battlefield series. Instead of your crosshairs being stuck to the center of the screen and pointed with your thumbstick, imagine them resting on the exact spot on-screen that you’re looking at. You could turn the camera with the thumbstick yet aim with precision using your eyes, letting you track an enemy easily and keeping you from shooting too far ahead of or behind a moving target (which happens quite often with thumbstick controls). Add in voice commands to, say, paint a location for an airstrike or spot and highlight enemies for your teammates, and you have an amazingly immersive and realistic shooter with limitless possibilities.
Regardless of how it’s applied, you have to give it to Kinect: it can improve a game by leaps and bounds. Of course, this all comes down to implementation; if a game is cared for it will flourish, and if not, it will die a slow and terrible death. Still, most of the core games currently using the device are all the better for it. It’s time for core gamers to realize just how important Kinect can be and give it a chance, as it proves it’s no longer a $150 novelty, and that it’s (hopefully) here to stay!