Monday, 10 November 2014

Amazon Echo analog data collection gets cute


Some time ago I wrote about Kinect 2 and its potential to provide deep insights into not only the games you play but what you watch, when you watch it, and the real-time emotions of the TV viewers in your home.

While Kinect 2 is loaded with sensors to capture different channels of data, other "simpler" devices are already acting as covert data collectors in our homes. A year ago it was discovered that some LG Smart TVs were collecting viewing data without the users' consent and sending it back to LG servers. We should assume LG are not the only connected-device maker playing this game. Now it appears that Amazon will be expanding its data collection activities further into the analog world. Last week they announced the Amazon Echo, a voice-controlled music streamer and general home digital servant.



Echo will play music, answer questions via web searches, set reminders, compile shopping lists and check the weather, all through simple voice commands. Its promotional video shows it as a cute addition to the family, helping everyone in small ways to make their lives easier. It gives a feeling of, "how did I ever live without this!" To achieve this magic it follows the Siri, Google Now and Cortana model of sending your requests to the relevant brand's cloud for analysis and returning answers. It is an extension of Amazon's approach of always trying to help their customers. And as with its recommendations and personalised offers, it comes at a price. When explicitly out to make a purchase, consumers may be aware of the trade-off of data in exchange for personalisation. But when a device sits in our living room and appears to just be a helpful music streaming, note taking, alarm clock ...thing, then we may not be so aware of the trade-off. Amazon have done a great job in creating a cute domestic disguise for their spy in the living room. Even the voice is lovely and unthreatening.

The Amazon Fire phone was the sales assistant in your pocket, and despite its lack of success we can expect Fire 2 to be much improved. The history of Kindle shows they just keep evolving and improving. Now Echo gives them a spy in our living rooms. I expect Echo 2 will extend its capabilities to home automation, increasing its scope of data collection to your whole home environment. Amazon have gone from desktop, to mobile, to domestic. As they say in their copy, the more you use it the more it/we gets to know you.


Thursday, 23 October 2014

Smartphones - what ever happened to ergonomics?


Smartphones have become the Swiss Army knife of the digital age. The expression, "There's an app for that", describes the seemingly endless diversity of functions you can use your smartphone for. While many of us have dozens of apps, most of us use very few. Mostly we habitually use a core set which allow us to communicate, discover and record. These three categories also describe three modes of holding our phones.

To communicate we can phone, email, text, or post to social media. To phone we tend to hold the phone up to our ear in the traditional, making-a-call mode. Despite Siri and the improvements in voice-to-text dictation, the other communication modes are usually done with the phone held at approximately 18 inches from our face, frequently in one hand.

We surf the web and discover information using the same single handed mode. When possible we will use two hands for this task, but frequently we are in situations where one hand is more convenient.

Finally we use our smartphones to record our lives in the form of photographs and video. This tends to be a two handed operation, but again it can sometimes be done single handed.

While this is what we want to do with our phones, their design is driven by the need to improve the technologies that allow these functions to happen. How we actually physically hold the device is not a focus of improvement. Whatever happened to ergonomics?

Pick up your iPhone, Samsung, HTC, Sony or any other new smartphone. Their design is primarily focused around the conflicting goals of incorporating as large a high-resolution screen as is practical combined with as much battery life as possible within the slimmest size. The camera and additional hardware switches are added in there, but their location and utility is of lower consideration than the screen/battery compromise. The whole package is encased in materials which say as much about the brand values and social statement as the functional needs of the device.


Coupled with this are ever-shortening product cycles. New models come out every year. The core technology increases in capability, the camera gets better, the pixel density gets higher, the screen seems to inevitably get larger, and the products get more anorexic. The product cycle is driven by the same forces that gave us ever increasing fins on the American automobiles of the 1950s. It makes last year's, or hopefully the competitor's current phone seem, so yesterday. At no stage in this headlong rush do the ergonomic failings of the slim cuboid seem to be addressed.

When looking at our high-definition, retina displays we do not actually hold our phones, we cradle them in our hand balanced on our pinky. One of our most personal, valuable and habitually used possessions is balanced in a precarious position. All the people we see with cracked screens stand testament to this. The larger the screen and the slimmer the form factor, the harder this single-handed use becomes. If you try to grip your device tightly and then use your thumb to select things on screen it quickly becomes tiring. You need a relaxed grip to comfortably use your thumb.

When we take pictures we are holding a thin slab, with none of the ergonomic enhancements actual cameras have: no hand grips or ridges to make gripping easier, not even a tactile surface which offers some friction to aid grip. And when we hold these slabs to our ears we increasingly need to stretch our fingers to grasp the device.

The software attempts to make concessions to the ergonomic problems which Steve Jobs so rightly pointed out at the iPhone's birth. The arc of reach of the thumb is a limiting factor in the size of the display. So now the iPhone 6 has a software trick which brings the top of the screen within reach.

Jonathan Ive is a very clever man, and a leader of the cult of the design object: beauty in the materials and details of the finish, shaving a tenth of a millimetre here, a new radius there. But for all his passion about making the perfect product he rarely makes phones which are holdable in use. The original iPhone and up to the 3GS were much more ergonomic devices. But as the phone has evolved and grown to today's iPhone 6 Plus they have increasingly failed in terms of ergonomics. It is not an easy task. Can a multi-capable device have one form which satisfies all uses? Maybe not. But which single use do our current phones actually do well?


Thursday, 21 August 2014

Looking at the Scottish Referendum: what drives social media engagement?

As a Scot living in Glasgow, the phrase "we live in interesting times" could not be more apt. In less than a month, on the 18th of September, the country will vote to leave or stay within the union of the United Kingdom.

We are immersed in a continual discussion of the pros and cons of independence. Almost all conversations contain reference to or are focused on the debate. In pubs, on the bus, at the shop till, in restaurants, everywhere you turn it is being talked about. None more so than on social media. What strikes me about the debates on social media is how different they are from other political campaigns or discussions.

In recent years the Obama 2008 campaign stands out as the textbook case of how to use the new channel of digital social media to powerful effect. The NYT summed up the importance of his campaign in the wider context of political use of the media:

“Thomas Jefferson used newspapers to win the presidency, F.D.R. used radio to change the way he governed, J.F.K. was the first president to understand television, and Howard Dean saw the value of the Web for raising money,” said Ranjit Mathoda, a lawyer and money manager who blogs at Mathoda.com. “But Senator Barack Obama understood that you could use the Web to lower the cost of building a political brand, create a sense of connection and engagement, and dispense with the command and control method of governing to allow people to self-organize to do the work.”



Obama did this with the help of some of the founders of the web 2.0 age, such as Marc Andreessen, the founder of Netscape and a member of the Facebook board. He used a central site, barackobama.com, to sign up over 2 million supporters and provide the digital and real world tools to campaign on the issues which he stood for and that resonated with them and their communities. He had over 1.2 million friends on Facebook and trumped his competition on Twitter and YouTube. Through the empowerment of a grassroots movement his message of hope came across loud and clear.






In Scotland today the way the Yes and No camps use social media has one striking difference. The No campaign feels very centralist. There is a single message delivered by the BetterTogether group, and echoed by the No sub-groups. The Yes camp is a collection of diverse groups who all share the same common goal but have different personal views on why it is important and what it means to them. There is the "official" yesscotland.net, which models itself on the Obama campaign and disseminates information and motivates independent action. But it is just one in the digital crowd. Many are not supporters of the SNP (the party with a majority in the Scottish Parliament and the traditional voice of independence); there are groups from the Greens, the Labour Party, the Socialists, the Liberals. Many professions have formed groups; there are groups defined by location, by interests, by race, by gender, by the type of pets they own. None of them is a carbon copy of the official Yes campaign. They are all singing their own tune. This is in stark contrast to the sense of a monolithic No camp.

Both camps may use stories from traditional news outlets as the kernel of a posting, but rarely is it just a case of sharing a story. Often a group will dissect and analyse the story. Admittedly each is full of their own prejudices and biases, but the effort that goes into blog postings and Facebook and Twitter discussions is impressive. Nor are the Yes or No postings/blogs always preaching to the converted. Yes and No supporters will comment and post on each other's blogs and postings. Real discussion goes on. Most of the time, at least in what I have seen, it is genuine discussion and friendly banter. Only occasionally are real insults thrown.

Celebrity endorsement comes from both camps, but organised collective efforts, such as the letter signed by over 200 celebrities urging Scotland to stay in the Union, were met with confused shoulder shrugging by most Scottish social media commentators. However, when economists, historians, industry leaders or academics take a stance and provide endorsement or evidence one way or another, they are seized upon by both camps and thrust to the fore to become unlikely champions. Politicians' statements from both camps are met with greater venom than those of mere mortals.

Scotland is not voting to elect a president or even a government. Alex Salmond is presented by the No camp as the leader, the boogie man who is attempting to con the Scottish people. To the Yes camp he is merely a vehicle for their ambitions and his role is far less important than that of the people who want Yes. The vote is about the future of the country. It is a very, very significant and unique moment for the people of Scotland. The level of passion it arouses is tangible and is manifest in the digital outpourings. It is this passion which is at the heart of the Yes and No campaigns.

The centralist approach is the model that each official leading campaign group has adopted. But why is it that the Yes campaign groups have run and taken the campaign as their own, whereas the Nos have clung to the parent ship more closely? The No campaign has always been on the back foot as it is arguing for the status quo. We are doing very well, thank you; why rock the boat, why take risks, why go into the unknown?

The Yes groups are all embracing the risks, the uncertainty, the lack of hard facts about tomorrow's future. They are characterised by optimism and, to borrow from 2008, Hope. The energy that is seen in the Yes campaign is a reflection of a sense of excitement for a new future, for the work ahead, for the opportunities, not the continuum of the norm.

This campaign, whatever the result, shows that we should not think simply about engagement. Engagement is a word describing taking part, sharing, participating. The people of Scotland are not using social media simply out of a desire to take part, or share their views. This is about passion: intensity, a desire to make change, a real need to add their weight to the argument.

In the commercial world the brands which have an active and intense following are those which support or deliver services or products which allow individuals to express or fulfill their passions. When a dull brand or product reaches out to social media to "engage" consumers, they have a real challenge. Humour, incentives, endorsement can all buttress their efforts, but nothing will replace real passion. Seek out the activities, views, needs, desires which induce real passion in your audience and you will have a far greater chance of inspiring genuine interest and productive engagement.



Tuesday, 19 August 2014

My number one top feature for an iWatch

September is coming round and with it the announcement of new Apple iPhones. This year there is a real chance we may also get to see the iWatch. If it does appear, I have a single must-have which I feel it needs in order to succeed.

I must want it on looks alone. Forget all the features and added functionality it may bring to my life; if it does not look gorgeous then you will have a hard time winning me over. Recent staff hires and reports seem to suggest that Apple is hoping to position the iWatch as a luxury product. All Apple Macs and iPhones (except the 5c) of recent years have oozed quality and precision engineering. But to make that move into luxury they have to go further.



Long ago watches lost their primary purpose as just timepieces. I could spend £0.69 for the cheapest watch on Amazon or £55k for the most expensive, and potentially gaudy, example. Amazon is not the place most people buy their luxury watches, and you can spend considerably more. By doing so you are not simply wanting to know the time, you are making a statement about you, and what you want to be seen as.

So for Apple to make a luxury watch they do not have to compete with the very high financial end, but they do have to compete on statement terms. The watch has to be a proud addition to a wrist, it has to stand for something, and the user has to want to broadcast their ownership. Existing smartwatches have failed in this respect. Pebble has a small but loyal following but, despite the ability to use custom watch face designs, it has never shouted out "wear me with pride".

LG, Sony, Samsung have all failed to excite the market. They may have great technology and Google Now is a very powerful desirable feature, but some - the original Samsung Gear - had very high numbers of customers returning the products. Some of this will be due to functional problems but I suspect a lot are to do with the diminishing desire to actually own the item.

To excite the buyer in a luxury market Apple has to excel at the visual and tactile details. It has to feel like a perfectly made object, with precision details and exquisite visual clarity. It must not be a chunky item, bulked out by the need for a long battery life; it must be elegant and restrained. More like this concept design by Charlie No of New York than this.

The first is about a precision watch, the second is about a digital device. Now Apple has amazing UX and design skills, and may well create a new visual identity which manages to express the latter and make it appear luxurious and highly desirable. But to do so is much harder than retaining some of the visual language from existing forms. To appeal to the masses it has to not look like a geek-only device. Its visual language should speak of quality, luxury, refinement and elegance, not of utility and Swiss Army Knife capability.

I have no doubt that the functionality will blend seamlessly with my other iOS devices. I expect Apple will provide wireless charging and somehow give it a battery that will last at least 48 hours. I expect it will be priced at an expensive but not outlandish cost of between £150 and £200. New capabilities such as health monitoring, fingerprint security, effective and accurate voice activation will all increase the desirability, but if it does not immediately delight my eyes then my wallet will stay in my pocket.






Tuesday, 5 August 2014

Is Google Material the future "stuff" of mobile?



Google has recently launched, if that is the right word, their new unified UI. It is named Material Design and has made big waves in the UX and design communities. When it arrives on commercial products, end users will be able to experience this new approach to user interfaces.


Develop a single underlying system that allows for a unified experience across platforms and device sizes. Mobile precepts are fundamental, but touch, voice, mouse, and keyboard are all first-class input methods.


So consistency is a core principle; let's come back to this later.

The inspiration, or foundation material, is paper and ink:

The material is grounded in tactile reality, inspired by the study of paper and ink, yet technologically advanced and open to imagination and magic.


John Wiley, one of the Principal Designers at Google, gave this insight about how Material Design came about:
I'm one of the instigators of material design. It actually came about a couple of years ago when we were working on a design problem involving Google Search. I was looking at mobile results on cards and I asked "what is this made of?" People gave me funny looks, like "what do you mean? It's just pixels." But I didn't think that was a good answer.
When you physically interact with software – actually touching the cards and links and buttons, etc. – you bring a lot of expectations around how physical objects behave. If the interface isn't thoughtful about those expectations – if it's just a bunch of pixels – it leaves you with a rather unsatisfying and inauthentic experience.



The key point is “what is this made of?”. We spend hours staring at, prodding and swiping this stuff, but what is it? A good question.

Material Design is the work of a design team whose heritage and inspiration is the 2.5D world of layers of paper. It is graphic design minimalism applied to digital design, and is a direct response to and rejection of the skeuomorphic design which typified the first few iterations of Apple's mobile iOS. Skeuomorphism grew to be seen as an over-ornate and decorative form of UI design; a gothic form of UI design. Its attempts to literally ape the physical appearances of the analog devices indicative of an application's function came to be viewed as excessive and unnecessarily visually complex.

We need to take a few steps back to understand the initial rationale for skeuomorphic design. Skeuomorphic design is an imitation of another object, replicating its visual form, shape and functionality. Let's start by looking at the two worlds: that being presented to the viewer in one form or another, and the world of experiences, memories and cultural inheritance of the user. As Don Norman puts it:

In the world of design, what matters is:
1. If the desired controls can be perceived
1.a. In an easy-to-use design, if they can both readily be perceived and interpreted
2. If the desired actions can be discovered
2.a. Whether standard conventions are obeyed

When a design borrows from or copies aspects which make it easier to perceive its function, it helps the user to match their model of how the system will behave to the actual system model. This borrowing from the established norms of a previous medium or cultural experience is not unique to the computer user interface.

Often when a new form or medium appears, the creators seek an appropriate reference point or model upon which to build the new way forward. They learn from the past. When Gutenberg created the first movable type he chose a typeface which mimicked the gothic script of hand-produced bibles.

Detail from the Gutenberg Bible

In order to achieve a look which mimicked the hand-written manuscript, his typeface, Blackletter, had over 300 characters to accommodate the many ligatures and flourishes of the then traditional model. Although his method of production was revolutionary, Gutenberg employed the established aesthetic: he was using a new technology to improve the production of books but was sticking with the understood and expected visual appearance. Partly for commercial reasons, but also because that was what he and his audience knew and expected.

The first moving pictures with a constructed narrative were modeled on the experience of watching a theatre performance. The camera was static and contained none of the techniques we now expect, pans, close ups, cuts etc. It was only later that film developed a language of its own.



The revolutionary Windows, Icons, Menus, Pointer (WIMP) user interface developed by Xerox took the existing office desktop as its central metaphor. The environment where these new desktop computers were to be used was a paper-based office. The WIMP desktop UI is still the default model for PCs, laptops, etc.

When Apple sought to enter the smartphone market it was clear the WIMP and desktop metaphor was not applicable. They retained the notion of icons and buttons, which were touched instead of clicked, but did away with windows and a pointer. The notion of files was now hidden from the user, and the limitations of screen size meant windows were infeasible. Instead of using a consistent model across the system screens and all applications, each Apple app was designed to express the functionality it provided by mimicking its analog equivalent. It's hard to remember, but making a tablet work for users was an unsolved problem until the iPad came along. Others had failed, and one of the reasons Apple did succeed was to do with making it unlike a computer. Skeuomorphism was a big element in this.








Skeuomorphic designs were employed to help users understand the capabilities of the new mobile computing platform. Buttons were given visual clues, highlights and shadows to give a 3D appearance and lift them off the screen. Functional areas which required users to touch were given clear boundaries.


Where there was no real-world analogy which could be readily employed, or widely understood by the audience, a new visual language should have been applied. This was the case in most elements of the actual system level of the iOS UI, but not so with some of the Apple apps. While it makes sense to mimic a calculator or a notepad, a feature such as “find my friends” has no real-world equivalent.





I would argue that for this new medium a skeuomorphic approach made good commercial and usability sense. The stuff that the early iOS UIs were made of was, in the case of applications, the stuff the users already knew, and in the case of system-level interface items it was buttons, switches and sliders and established concepts such as dialog boxes. What was missing was an acknowledgement that not every application needs to be skeuomorphic; for those without analog equivalents, or where slavish mimicry hindered usability, a different design approach was needed.


A new language for a new medium


 

Google’s Material world is an attempt to move beyond the previous medium and to create a new digital language which is appropriate to the new mobile digital world. The argument is that the user base understands how to interact with mobile technology, and the skeuomorphic approaches of the past are no longer necessary or desirable.

I question the universality of a paper-based metaphor, but the looseness of its definition means it should be adaptable to many different circumstances. I am concerned that the dominance of a minimalist aesthetic may result in throwing the baby out with the bathwater. The affordances and cultural heritage which gave skeuomorphic designs their validity could be used to the user's benefit, but they are removed from the new language, and users will need to take a greater leap to learn how to work this new stuff. This is always the case as a medium develops, and the benefits of a truly digital language should outweigh what is lost.

The primary aim of establishing a consistent experience across different mobile platforms is welcome, but harder to achieve. At the OS level it is within Google’s power to at least deliver consistency within non-forked versions. When it comes to third-party applications the problems are much harder.

Consistency is a cornerstone of good human-computer interaction. It provides users with a stable, predictable environment where things happen as they expect: items are in the same locations, visual symbols have the same meaning regardless of the application they are used in, and so on.




Today achieving consistency is much harder than it has ever been. For a while Apple achieved this for their desktop environment. Early third-party applications for the first few iterations of the desktop OS had a somewhat cavalier approach to consistency. Application developers ignored many of the Apple Interface Guidelines; finding items in menus on some early packages was a pot-luck affair. Over time the (much smaller) development community appreciated the benefits of consistency over the need to be unique, and the consistency of the Apple environment became one of its strongest assets. While there were notable exceptions, such as Kai Krause, most Apple applications were easy to switch between.

Then the CD-ROM era and subsequently the web created a free for all development environment where anything went. While this was good for experimentation and innovation, consistency went out the window. With millions of developers and no agreed set of guiding principles or unifying force, consistent patterns and behaviors have only been adopted through mimicry, conservatism, and recognition of the benefits of following best practice.

The same is true for mobile applications. Many iOS and Android developers embraced skeuomorphism; others experimented with new ways of delivering content and functionality within a touch environment. As Apple has found, even with its much stricter policing of the distribution process, imposing a consistent UI experience across all apps is very hard.


Google will showcase Material Design with Gmail and its core apps. I have no doubt that these will be visually stunning. Some of the biggest names in the app ecosystem will readily adopt it. The appetite for a shift away from skeuomorphism is widespread in the design community. However, the example of Windows 8’s tiled touch UI demonstrates that while it is visually appreciated by designers, not all applications fulfil its potential. In the case of Material the design guidelines are a beautiful expression of its capabilities; we will have to see if this new material will be the unifying aesthetic of Android experiences.

Thursday, 10 July 2014

Jaguar Land Rover and the self learning car

I saw this today and my initial response was enthusiastic. It is great to see such innovation from Jaguar Land Rover. This is the sort of thing we expect from Google, but not from a UK automotive company.



There are lots of context-aware solutions personalised to the individual and informed by different data sources and sensors. Great, the future is just around the corner, type of stuff. But it highlights a problem with the mass automation of products. Brands will want to develop their own solutions to demonstrate forward thinking and add USPs (although in five years these will be a requirement and mainstream), each tailored to the features of the product but based on information about the individual and their usage patterns.

This could lead to a world where we have loads of different automated systems offering advice and context-aware decision making. Sometimes we may get multiple products offering us similar options. Google Now, an automated car, a smart heating system, a tailored diary system etc. all acting on our behalf because we are running late due to traffic congestion.

Obviously each system can be tailored, and its ability to act on our behalf will be set by the user. But it seems that there will be so much duplication. Google and Apple are each pushing one home automation system - or ecosystem - to rule them all, and consequently bind users closer to their brand. They are also offering in-car solutions. So far these are in-car entertainment and connectivity, but there is good reason to expect this "self learning car" approach will be rolled into the offering.

The cost of making concepts such as the self-learning car a reality is not small, and it takes a level of digital expertise many companies do not currently have. The Apple and Google options allow brands to avoid doing the heavy lifting and concentrate on the things they know well and can do best. If Jaguar Land Rover put this into production, will it be home-grown or will they pay their shilling to the big boys? Time will tell, but from a user perspective a unified solution has an appeal over a collection of disparate solutions.

Thursday, 12 June 2014

The internet of me


There is an old gag that on leaving the house a man makes the sign of the cross. He is not religious; it stands for: spectacles, testicles, wallet, watch. A quick check to see he has all the essential items a man needs to be a man.

Today’s version would have to include the phone. It is far more vital to our daily existence than any of the others. In fact, increasingly we are now leaving home with a digital ecosystem which defines what it is to be a digital citizen. Each item no longer exists in isolation. Our smartphone is the key, the hub for all our new devices, and has the potential to replace at least one of them: the wallet. This trend will only continue.

As we adopt wearables we move from having a single digital device to a collection of independent but symbiotic internet devices. Our wristbands and smartwatches can act as information conduits but also collect data and feed it back to our smartphone. They will assess our health and wellbeing. Some of the work in deciding what to do with the data will take place on the device, but most will be done by our smartphone apps talking to external health databases and, where appropriate, to our health providers and insurers.

Smart glasses will augment our experiences, again using the smartphone as the conduit to the external data sources. Image recognition, location services, pricing comparison, social communication will all travel through the smartphone hub to our personal internet of devices.

A lot of the time this will result in autonomous actions that do not require input from the owner. Where you are, your schedule, your habits, your diary, your needs will be assessed to make decisions on your behalf. Within your own ecosystem of physical devices, and the wider personal information stored in countless cloud services, rules will run and instructions will be sent out.
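To make the idea of rules running across a personal device ecosystem concrete, here is a minimal, purely illustrative sketch of a hub that evaluates condition/action rules over readings reported by the owner's devices. Every device name, reading key and rule below is a hypothetical example, not any vendor's actual API.

```python
# Illustrative sketch only: a personal hub evaluates simple
# condition/action rules over the latest data reported by devices.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Hub:
    # Latest reading from each device, e.g. {"car.eta_delay_min": 20}
    state: dict = field(default_factory=dict)
    # Each rule pairs a condition over the state with an action that
    # produces an instruction for some other device.
    rules: list = field(default_factory=list)
    sent: list = field(default_factory=list)  # instructions dispatched so far

    def add_rule(self, condition: Callable[[dict], bool],
                 action: Callable[[dict], str]) -> None:
        self.rules.append((condition, action))

    def report(self, key: str, value: Any) -> None:
        """A device (watch, phone, car) pushes a new reading to the hub."""
        self.state[key] = value
        self.run_rules()

    def run_rules(self) -> None:
        """Re-evaluate every rule; fire the actions whose conditions hold."""
        for condition, action in self.rules:
            if condition(self.state):
                self.sent.append(action(self.state))

hub = Hub()
# Hypothetical rule: if traffic makes the owner late, delay the heating.
hub.add_rule(
    condition=lambda s: s.get("car.eta_delay_min", 0) > 15,
    action=lambda s: "heating: delay warm-up by %d min" % s["car.eta_delay_min"],
)
hub.report("car.eta_delay_min", 20)
print(hub.sent)  # ['heating: delay warm-up by 20 min']
```

A real system would of course need deduplication, user-set permissions on what each rule may do, and secure channels to the cloud services involved, but the shape — devices report, rules run, instructions go out — is the one the paragraph above describes.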

While we are still awaiting the impact of the Internet of Things we are starting to see the Internet of Me: a world where devices are networking and making decisions for my benefit, based on my very personal needs and state. The Internet of Me means the power of my network of devices is greater than that of any of them individually. If a new device comes along, or I upgrade my health monitoring device, then the other devices in the network could have access to this new data and functionality. Combining data from more personal sources makes the decisions that can be made more accurate, wider-reaching and so on.

Google always have, and more recently Apple have, been opening out their platforms so that third parties can build solutions that will easily communicate and co-operate. For the time being we have to be in one camp or the other, but device makers will profit from supporting both. The added capabilities and ease of collaboration will determine what is possible on each platform.

Commercially it will open a whole new market, where tomorrow’s Apples and Facebooks will be born. Certain device combinations will hit the sweet spot and deliver killer apps. A man will not be judged by the cut of his suit but the configuration of his internet.

Socially, the Internet of Me will further widen the digital divide. Already, just having access to the internet gives an advantage over those without access. If we extend this to the Internet of Me, then the individual with their own ecosystem has a definite advantage over one who does not. They may be able to get access to better healthcare or lower insurance premiums. They can have immediate access to greater amounts of contextual information. They are more likely to be targeted with promotions and offers and make savings. How will society and governments deal with these digital inequalities, or will they even try?

So what is the new sign of the cross? Smartphone, smartwatch, AR glasses, biometric earphones, and not forgetting testicles. Some things never change.