
Is Google Glass Half Full or Half Empty?

We’re huge Apple fans. We all use Macs, and believe Apple still makes the best hardware in the tech space, with an ecosystem that is, for the most part, fluid and relatively seamless—as long as you’re using only Apple products.

But after watching what Google is up to lately, we're starting to see where Apple falls short, and that's data.

Now that a handful of folks have gotten their hands on Glass, there's been a wide range of opinions on whether it's revolutionary or whether it will go the way of the Apple Newton.

Our own Ryan Miller was fortunate enough to share a few glasses of wine with someone from Google who has been testing Glass, and he also attended a session at SXSWi on how to develop for the platform. He was, to say the least, very impressed.

Obviously, Glass is a unique piece of hardware: a mobile device that's supposed to get technology out of the way by bringing it closer to you. That sounds counter-intuitive, but it's true. At a conference like SXSW especially, most people are busy looking at screens rather than talking to each other.

A perfect example cited at the session: the concert phenomenon. When you're at a live show and the performer comes out on stage, what's the first thing that happens? Hundreds or even thousands of people pull out their phones to take photos or videos to share. We're obsessed with showing our networks where we are and what we're doing, and that act takes us out of the moment; we end up living it through a screen. That's one of the problems Google is trying to tackle with Glass: letting us document those moments without taking us out of them. There are other examples below, but it's in this context that we see the value of a project like Glass.

First things first. If you haven't seen Google's latest video on what it's like to wear Glass, go watch it now. While they've made it clear that this is still very much a work in progress, we got a closer look at the developer session, and the product seems to perform just like what you see in the demo. Video recording, photo capture, voice recognition, live Google search, notifications: it all worked nearly flawlessly.

This isn't a device that's going to replace your smartphone; it's going to complement it. You're still going to use your phone for complex tasks like reading longer-form text, sending emails, playing games, and using many apps. But Glass (and the other wearable computing devices that are sure to come) is trying to help us better manage short-form notifications and the constant interruptions on our smartphones. How many times a day do you reach down to check messages? 50? More?

This brings us to what we think is becoming one of Google's main strategies, and why we think they may start poaching iPhone users. The new mobile differentiator is going to be understanding context: letting your devices know where you are, what you're doing, and who you're with. Your device could know when and where it's OK to interrupt you with a notification, or who you want notifications from. Your phone is going to take all of these context clues and predict what you'll need next. Where are you going? Have a map already cued up in case you need it. Who are you with? Pull up your friend's latest Facebook updates or tweets. What matters to you? If your favorite team is playing or you're following a particular story in the news, have those updates queued up preemptively.
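To make that idea concrete, here's a minimal sketch of what this kind of context-based notification filtering could look like. To be clear, this is purely our own illustration: the fields, rules, and names are made up for the example, not based on anything Google has published about how Glass actually works.

```python
from dataclasses import dataclass

# Hypothetical snapshot of context a device might assemble from its
# sensors and services. These fields are illustrative, not Google's.
@dataclass
class Context:
    location: str          # e.g. "office", "home", "concert"
    activity: str          # e.g. "driving", "in_meeting", "idle"
    companions: set[str]   # people the device believes are nearby

@dataclass
class Notification:
    sender: str
    topic: str
    urgent: bool

def should_interrupt(note: Notification, ctx: Context,
                     vip_senders: set[str]) -> bool:
    """Decide whether a notification is worth surfacing right now."""
    if note.urgent:
        return True        # urgent messages always get through
    if ctx.activity in {"driving", "in_meeting"}:
        return False       # never interrupt during these activities
    if note.sender in vip_senders or note.sender in ctx.companions:
        return True        # people you care about, or are with right now
    return False           # everything else can wait for the phone

# Example: a non-urgent message from a friend while you're idle at home
ctx = Context(location="home", activity="idle", companions={"alex"})
note = Notification(sender="alex", topic="dinner plans", urgent=False)
print(should_interrupt(note, ctx, vip_senders={"pat"}))  # True
```

The interesting part isn't these particular rules; it's that the device, not you, decides which interruptions deserve your attention right now.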

This is where Google really excels, particularly because so many of us already use its services (Gmail, Google Calendar, YouTube, Maps, G+, to name a few). Like it or not (and many people don't), Google already knows an amazing amount about us. It has our data. It already has context. And it can use that context to understand our needs and deliver relevant notifications as we go about our days, without irrelevant interruptions. It's a pool of data that Apple simply doesn't have at this point. That contextual delivery of information is what Glass is going to be great at, because the information will appear within your field of vision, without taking you out of the moment.

There's a long way to go. Privacy concerns are going to be enormous, especially since you can't tell when someone is recording you. Does everyone feel comfortable handing Google all of their contextual data? Their movements? Their messages and friends? Will Glass have the opposite effect and take us out of moments as more and more integrations and notifications become possible? Will there be a social stigma to walking around with a wearable computer and camera on your face? All of this remains to be seen, but we're at least intrigued by what we've seen so far and by the conversations we've had with people who have tested Glass firsthand.

What do you think? Would you ever wear Glass? Are you creeped out by the privacy or fashion implications? We'd love to know what you're thinking, so fire away in the comments!