RSS: Jeffrey Zeldman Presents: The Daily Report
AT FIRST GLANCE, November 2016 has bigger fish to fry than a small, cult holiday celebrated by web developers and designers.
Each day since November 8, 2016 has brought new, and, to some of us, unimaginable challenges to the surface. Half of America is angry and terrified. The other half is angry and celebrating. At a time like now, of what possible use is an annual holiday celebrated mainly on social media by a tiny posse of standards- and accessibility-oriented web developers and designers?
From Blue Beanies to Black Hats
Many web developers have “moved on” from a progressive-enhancement-focused practice that designs web content and web experiences in such a way as to ensure that they are available to all people, regardless of personal ability or the browser or device they use.
Indeed, with more and more new developers entering the profession each day, it’s safe to say that many have never even heard of progressive enhancement and accessible, standards-based design.
The trouble with this top-down approach is threefold:
Firstly, many new developers will build powerful portfolios by mastering tools whose functioning and implications they may not fully understand. Their work may be inaccessible to people and devices, and they may not know it—or know how to go under the hood and fix it. (It may also be slow and bloated, and they may not know how to fix that either.) The impressive portfolios of these builders of inaccessible sites will get them hired and promoted to positions of power, where they train other developers to use frameworks to build impressive but inaccessible sites.
Secondly, only developers who understand and value accessibility, and can write their own code, will bother learning the equally exciting, equally edgy, equally new standards (like CSS Grid Layout) that enable us to design lean, accessible, forward-compatible, future-friendly web experiences. Fewer and fewer will do so.
Thirdly, and as a result of the first and second points, more and more web experiences every day are being created that are simply not accessible to people with disabilities (or with the “wrong” phone or browser or device), and this will increase as standards-focused professionals retire or are phased out of the work force, superseded by frameworkistas.
#a11y is Code for “Love Your Neighbor”
This third point is important because people with disabilities are already under attack, by the example of the U.S. president-elect, and as part of a recent rise in hate crimes perpetrated by a small but vocal fringe. This fringe group of haters has always been with us, but now they are out of the shadows. They are organized and motivated, and to an unmeasured degree, they helped Donald Trump win the White House. Now that he’s there, people of good will ardently hope that he will condemn the worst bigots among his supporters, and fulfill his executive duties on behalf of all the people. I’m not saying I expect him to do this today. I’m saying I hope he does—and meantime it behooves us to find ways to do more than just hope. Ways to make change.
One small thing designers and developers can do is to make accessibility and usability Job 1 on every project. And to take a broad view of what that means. It means taking people’s messy humanity into account and designing for extreme ends of the bell curve, not just following accessibility authoring guidelines. (But it also means following them.)
In doing those things, we can love our neighbors through action. That—and not simply making sure your HTML validates—is what designing with web standards was always about.
On November 30, I will put on my blue hat and renew my commitment to that cause. Please join me.
AT HOME, sick with a cold and bored, my daughter buys a single packet of “My School Dance” in a freemium iTunes game. The manufacturer charges her (well, charges me) for ten packets. This same “accidental” 10x overcharge happens across three different games by the same manufacturer in the span of about an hour.
American Express notifies me of the spurious charges, but won’t let me dispute them until they are “posted.” I spend half an hour on the phone with a very nice gentleman at Amex learning this. Why would Amex notify customers about a charge days before they can do anything to resolve it? I don’t know. And I don’t ask the gentleman on the phone. His job is hard enough.
A few days pass. Amex “posts” the false charges and emails me with a link to resolve the problem on Amex’s “dispute a charge” web service.
Amex’s “dispute a charge” web service “encounters an error” when I try to use it to resolve the problem.
This happens every time I try. I try for three days.
So I call Amex, but I can’t resolve the problem because I don’t have the card in my wallet.
So I head to iTunes, where I should have gone in the first place, and click through two or three generations of iTunes “Report a Problem” interfaces: visually different generations of iTunes software, with different user paths, all still being served by Apple. Generations of iTunes software that, when they fail, link to other generations of iTunes software, which also fail.
I click and click my way through five years of iTunes interfaces.
Finally I find an iTunes page where I can manually “Report a problem” for each of the 27 false charges. (Three of the charges, remember, were legitimate. I’m willing to pay for the three items my daughter intended to buy. But not 30.)
If one software product overcharges your kid by a multiple of 10, that could be a software bug. When three products from the same manufacturer all do it, that’s not a bug, it’s a deliberate attempt to defraud families, by overcharging on purpose and hiding behind the opacity of iTunes’s purchase reporting. Simply put, the manufacturer is dishonest, and figures iTunes’s support section is impenetrable enough that you’ll eventually give up trying to get a refund.
But they didn’t count on my tenacity. I’m the Indiana Jones of this motherfucker. I have studied maps and bribed natives and found my way to the hidden iTunes refund page that actually, sometimes, works.
On this page, I inform Apple of the fraud 27 times, in 27 different boxes. Each time, after reporting, I click a blue button, which generally returns an error message that iTunes was unable to process my request. So I enter the data and click the button again. It’s only 27 boxes of shit. I’ve got all the time in the world.
The page tells me that only two refunds went through. Every other request ends with an error message saying iTunes could not process my request, and encouraging me to try again later.
Instead, I leave the page open, and, about ten minutes later, I manually reload it. When I do so, the display updates—I guess this generation of iTunes software preceded “Ajax”—and I learn that most of my refunds have gone through.
So the software actually works about 33% of the time, even though it indicates that it only works 5% of the time. Remember that wait-ten-minutes-then-randomly-reload-to-see-if-anything-changed trick. It’s the sign of excellently designed consumer software.
I’ve put over two hours of my time into this. Going on billable hours, I’ve probably lost money, even if I get all my overcharges refunded. But there’s a principle here. Several principles, actually. Tricking kids is wrong. Stealing is wrong. Building a beautiful front-end but neglecting customer service is wrong. Mainly, I’ve just had enough of 2016’s bullshit.
Fuck you, 2016.
Jeremy Keith was the seventh speaker at An Event Apart San Francisco this month. His presentation, Evaluating Technology, set out to help us evaluate tools and technologies in a way that best benefits the people who use the websites we design and develop. We looked at some of the hottest new web technologies, like service workers and web components, and dug deep beneath the hype to find out whether they will really change life on the web for the better.
Days of future past
It’s easy to be overwhelmed by all the change happening in web design and development. Things make more sense when we apply an appropriate perspective. Although his presentation often dealt with “bleeding-edge” technologies (i.e. technologies that are still being figured out and just beginning to be supported in some browsers and devices), Jeremy’s framing perspective was that of the history of computer science—a field, pioneered by women, that evolved rationally.
Extracting the unchanging design principles that gave rise to the advances in computer science, Jeremy showed how the web evolved from these same principles, and how the seemingly dizzying barrage of changes taking place in web design and development today could be understood through these principles as well—providing a healthy means to decide which technologies benefit human beings, and which may be discarded or at least de-prioritized by busy designer/developers working to stay ahead of the curve.
Resistance to change
“Humans are allergic to change,” computer science pioneer Grace Hopper famously said. Jeremy showed how that very fear of change manifested itself in the changes human beings accept: we have 60 seconds in a minute and 24 hours in a day because of counting systems first developed five thousand years ago. Likewise, we have widespread acceptance of HTML in large part because its creator, Tim Berners-Lee, based it on a subset of elements familiar from an already accepted markup language, SGML.
How well does it fail?
In our evaluating process, Jeremy argued, we should not only concern ourselves with how well a technology works, but also how well it fails. When XHTML 2.0 pages contained an error, the browser was instructed not to skip that error but to shut down completely. Thus, XHTML 2.0 was impractical and did not catch on. In contrast, when an HTML page contains an error or new element, the browser skips what it does not understand and renders the page. This allows us to add new elements to HTML over time, with no fear that browsers will choke on what they don’t understand. This fact alone helps account for the extraordinary success of HTML over the past 25 years.
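The difference in failure modes is easy to see in a tiny markup sketch (mine, not from Jeremy’s talk; the element name is hypothetical). An HTML parser that has never heard of an element simply ignores the unknown tags and still renders the content inside them:

```html
<!-- A page using a newer, made-up element an old browser doesn't know. -->
<p>Status: <status-badge>All systems go</status-badge></p>
<!-- An old browser ignores the unknown <status-badge> tags but still
     renders the text "All systems go" — the page fails gracefully.
     Under XHTML 2.0's rules, an equivalent parsing error would have
     stopped the page from rendering at all. -->
```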
Likewise, service workers, a powerful new technology that keeps our work available even when devices are offline, fail well, because they are progressively enhanced by design. If a device or browser does not support service workers, the content still renders.
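That fail-well pattern is simple to sketch in code. The snippet below is a minimal, hypothetical example (the `/sw.js` path is an assumption, not from the talk): registration only runs when the browser actually exposes `navigator.serviceWorker`, so unsupporting browsers skip it silently and get the normally served page.

```javascript
// Returns true only if the given navigator-like object supports service workers.
function supportsServiceWorker(nav) {
  return typeof nav === "object" && nav !== null && "serviceWorker" in nav;
}

// In a browser, register the worker only when the feature exists;
// everywhere else this block is skipped and the page still works.
if (typeof navigator !== "undefined" && supportsServiceWorker(navigator)) {
  navigator.serviceWorker.register("/sw.js"); // "/sw.js" is a hypothetical path
}
```

Because the check is additive, there is no error path for older browsers: the enhancement either applies or quietly doesn’t.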
On progressive web apps
Likewise, progressive web apps, when designed responsively and with progressive enhancement, are a tremendously exciting web development. But when they are designed the wrong way, they fail poorly, making them a step backward for the web.
Jeremy used the example of The Washington Post’s Progressive Web App, which has been much touted by Google, who are a driving force behind the movement for progressive web apps. A true progressive web app works for everyone. But The Washington Post’s progressive web app demands that you open it on your phone. This kind of retrograde door-slam is like the days when we told people they must use Flash, or must use a certain browser or platform, to view our work. This makes it the antithesis of progressive.
Dancing about architecture
There was much, much more to Jeremy’s talk—one of the shortest hours I’ve ever lived through, as 100 years of wisdom was applied to a dizzying array of technologies. Summarizing it here is like trying to describe the birth of your child in five words or less. Fortunately, you can see Jeremy give this presentation for yourself at several upcoming An Event Apart conference shows in 2017.
The next AEA event, An Event Apart St. Louis, takes place January 30-February 1, 2017. Tomorrow I’ll be back with more takeaways from another AEA San Francisco 2016 speaker.
The post State of the Web: Evaluating Technology | Jeremy Keith appeared first on Zeldman on Web & Interaction Design.
12 LESSONS from An Event Apart San Francisco – № 5: Val Head was the 9th speaker at An Event Apart San Francisco last week. Her session, Motion In Design Systems: Animation, Style Guides, and the Design Process, led us through everything designers and developers need to make web animation work for our whole team.
Val covered guidelines for designing animation that fits your brand, making animation part of your design process, and documenting your animation decisions in your style guide for future use.
It takes a village
Animation works best when the whole team plans for it. If it’s simply a wish—say on the part of the designer—everyone in the chain will be too busy with higher priority tasks, and the animation won’t get made.
Which is a pity, because well-considered animations (such as Val showed) can make interactions much easier to understand. Additionally, if choreographed by the entire team as part of a bigger picture, animations can reinforce your brand. (Done without consideration, and without the support of the entire team, they’re more likely to contradict important brand attributes.)
Better animation requires good communication, consisting of…
- Shared vocabulary
- Established animation values
- Documentation and repeatability
Deliverables – the things that start conversations
The first deliverables for animation are conversation starters: storyboards and sketches that help the team envision where there is potential for animation in their user flow, see how an animation could make the screen easier for users to understand, and begin to plan how to animate between screens. Best of all, anyone can create a sketch or storyboard: artistic talent is not required (these are not Pixar animations but simple conveyors of ideas).
In every storyboard, we should draw or describe a trigger (what starts the action?), an action (what takes place?), and a quality (how does it happen?).
Motion comps and interactive prototypes
Motion comps answer questions about how the animations should look, move, and behave, and allow for quick iteration. When handing them off to the development team, it’s important to include the duration and delay values, details of the easing used, repeat values, and iteration counts.
Define and document – save future you time and effort
Interface animations are most effective when they work in concert as part of the bigger picture. Designing and choreographing your web animation efforts from the top down leads to more effective animations that integrate into your design system. And, defining a motion language for your brand can help your team to develop a shared vision from which to work.
Don’t just create animations—define and document them. Define your brand in motion with the same care you take for your logo, style guide, and pattern libraries. Use design principles to inform motion decisions. Study Brand Pillars, Voice & Tone, and Experience Pillars, and build your animation guidelines from there. Animations are best when they’re brand-appropriate and repeatable.
Get input from everyone
Having brought us through the rationale for animations and a variety of potential workflows, Val took us deeply into the details that make for effective animations, and ended with a game plan enabling everyone on the team to become an undercover animation superhero.
Tomorrow I’ll be back with more top takeaways from another AEA San Francisco 2016 speaker. The next AEA event, An Event Apart St. Louis, takes place January 30-February 1, 2017
See also: 4 tools for designing better UI animation by Val Head.
12 LESSONS from An Event Apart San Francisco – № 4: Jason Grigsby was the 10th speaker at An Event Apart San Francisco last week. Jason’s session, Adapting to Input, presented designers and developers with a conundrum many of us hadn’t yet considered when designing for our new spectrum of web-capable devices.
Responsive web design forced us to accept that we don’t know the size of our canvas, and we’ve learned to embrace the squishiness of the web. Well, input, it turns out, is every bit as challenging as screen size! We have tablets with keyboards, laptops that become tablets, laptops with touch screens, phones with physical keyboards, and even phones that become desktop computers. What’s a design mother to do?
During his session, Jason guided us through the input landscape, showing us new forms of input (such as sensors and voice control) and sharing new lessons about old input standbys. We learned the design principles needed to build websites that respond and adapt to whichever inputs people choose to use.
Four truths about web inputs
Jason began by sharing four truths about input in 2016:
- Input is exploding — The last decade has seen everything from accelerometers to GPS to 3D touch.
- Input is a continuum — Phones have keyboards and cursors; desktop computers have touchscreens.
- Input is undetectable — Browser detection of touch, and nearly every other input type, is unreliable.
- Input is transient — Knowing what input someone uses one moment tells you little about what will be used next.
A Golden Rule of Inputs
Just as many of us screwed up our early approach to multi-device design by consigning the “mobile web” to a non-existent “mobile context,” we now risk making a similar blunder by believing that certain tasks are “only for the keyboard”—forgetting that by choice or of necessity, the people who engage with our websites use a variety of devices, and our work must be available to them all.
One of my principal takeaways from Jason’s presentation was that every desktop design must go “finger-friendly.” Or, as Josh Clark put it back in 2012, “When any desktop machine could have a touch interface, we have to proceed as if they all do.”
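One way to act on that advice today (my sketch, not from Jason’s slides; the selector is hypothetical) is the CSS `pointer` media feature, which adapts to the primary input’s precision without trying to detect touch in script:

```css
/* Default sizing for fine pointers (mouse, trackpad). */
.nav a {
  padding: 0.25em 0.5em;
}

/* When the primary pointer is coarse (e.g. a finger),
   grow the target so it is comfortably tappable. */
@media (pointer: coarse) {
  .nav a {
    padding: 0.75em 1em;
  }
}
```

Because the query describes the pointer rather than the device, it keeps working as laptops grow touchscreens and phones grow keyboards.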
For more illuminations on input, read Jason Grigsby’s “Adapting to Input” in A List Apart, and check out these amazing demos and articles:
- Hololens Gestures
- Leap Motion Oculus Rift tour
- Microsoft Research Pre-Touch Sensing for Mobile Interaction
- You cannot reliably detect a touch screen
- Interactive touch laptop experiments
- New Rule: Every Desktop Design Has To Go Finger-Friendly
- jQuery Pointer Events Polyfill
- Pointing the Way Forward
- Warby Parker Gyroscope Example
- Lightsaber Escape Gyroscope Example
- Generic Sensor API Draft
- Autofill: What web devs should know, but don’t
- Payment Request API
- Web Cam Toy
- HTML Media Capture and getUserMedia
- Web Speech API Demonstration
- Web Speech API Translation Demonstration
- Web Bluetooth
- Physical Web
- One amazing video that shows the potential of the physical web
- Open Device Labs
- Four Truths About Input
Tomorrow I’ll be back with another top takeaway from another AEA San Francisco 2016 speaker. The next AEA event, An Event Apart St. Louis, takes place January 30-February 1, 2017 in the shadow of Mr. Saarinen’s fabulous arch. See you there!
I SPENT yesterday with my Swedish friend Pär (“Peyo”) Almqvist, who returned from LA Sunday morning and headed home to Sweden Sunday night. We met in Stockholm in 1999, when Peyo was 19, and have been close ever since. In 2000, Peyo wrote “Fragments of Time” for A List Apart. Reading it, you can see how thoughtful he is as a creative person.
A few years ago, Peyo cofounded OMC Power, a start-up that brought affordable solar power to rural villages in India—profoundly poor villages where, until that time, folks had relied on dirty gasoline-powered generators to get what little electricity they could.
National Geographic TV covered OMC’s work just this week in their special, “Years of Living Dangerously;” in the video clip on their site, you can watch David Letterman interview one of Peyo’s co-founders about what they’ve accomplished so far, and why it matters.
Letterman went to India to cover the threat of climate change and what’s being done to fight it. OMC Power is providing clean energy and a model for India to electrify itself without adding to the pollution that contributes to climate change. OMC started in India because folks in India needed the power and therefore welcomed them; and also because, by working with small rural villages, they encountered less violent opposition from the oil companies than they would have if they had attempted the experiment in Europe or North America. When the power grid fails in the west, folks in India will still have power—an irony of developed nations’ dependence on dirty fossil fuels.
In a time when so many of us feel helpless about climate change, and others, at the behest of corporate masters, cynically deny that it exists, it is good to know people who are making a difference and earning a living in doing so.
While Peyo remains an advisor to OMC Power, he has since co-founded a music startup, which I can’t talk about yet, but which I believe will meet a real need in music and may even change how some music gets made. (Like me, Peyo has a musical background, although, unlike me, as a producer and composer he has had hits in Sweden.)
It was his new music start-up that brought Peyo to New York and LA during the past week. I missed the chance to spend the week with him as I was in San Francisco doing the final AEA conference of 2016. It was great to spend a day together in New York, talking about our families, our businesses, and the world.
12 LESSONS from An Event Apart San Francisco – № 3: Derek Featherstone was the 10th speaker at An Event Apart San Francisco, which ended Wednesday. His session, Extreme Design, showed how creating great experiences for people with disabilities results in better designs for everyone.
Focusing relentlessly on accessibility helps us think of extreme scenarios and ask questions like “how can we make this work eyes free?” and “how can we make this work with the least amount of typing?” Most importantly, it leads to deeper design thinking that solves problems for everyone who uses our sites and products.
A Map For The Blind
One of my favorite examples from Derek’s presentation had to do with a map. A Canadian city was expanding geographically to encompass some of the surrounding suburbs. The city’s website was charged with letting all citizens know about the change. The web team did what you or I would probably do: they created a map that clearly showed the old and new city limits.
Unfortunately, this visual map was by definition inaccessible to blind citizens, so the city brought in Derek and his colleagues to design an equivalent experience for the unsighted. Derek’s team and the web team pondered typical solutions—such as laborious written descriptions of the city’s shifting geographic borders. But these were not user-friendly, nor did they get to the heart of the problem.
Maybe creating a verbal equivalent of a visual map wasn’t the answer. Derek’s team kept digging. Why was the map created in the first place, they asked. What was the point of it? What were users supposed to take away from it?
It turned out, people wanted to know if their street fell within the new city boundaries because, if it did, then their taxes were going to go up.
Solving for a map wasn’t the point at all. Allowing people to find out if their home address fell inside the new city limits was the point.
A simple data entry form accomplished the task, and was by definition accessible to all users. It was also a much quicker way even for sighted users to get the information they wanted. By solving for an extreme case—people who can’t see this map—the web teams were able to create a design that worked better for everyone.
Tomorrow I’ll be back with another top takeaway from another AEA San Francisco 2016 speaker. The next AEA event, An Event Apart St. Louis, takes place January 30-February 1, 2017.
The post Solve the Right Problem: Derek Featherstone on designing for extremes appeared first on Zeldman on Web & Interaction Design.
12 LESSONS from An Event Apart San Francisco – № 2: Eric Meyer was the 11th speaker at An Event Apart San Francisco, which ended Wednesday. His session, Compassionate Design, discussed the pain that can occur when our carefully crafted websites and applications, designed to create an ideal experience for idealized users, instead collide with messy human reality.
You can’t always predict who will use your products, or what emotional state they’ll be in when they do. A case in point: Facebook’s “Your Year in Review” feature, designed by well-meaning people to help Facebook users celebrate their most important memories from the preceding twelve months, shoved a portrait of Eric’s recently deceased daughter Rebecca in his face, surrounded by dancing and partying clip-art characters who appeared to be celebrating her death.
With great power…
Certainly, no one at Facebook intended to throw a hundred-pound bag of salt into the open wound of a grieving parent. What happened, surely, was that no one sitting around the table when the feature was planned asked the question, what if one of our users just had the worst year of their lives?
If even one of the talented Facebook folks charged with creating the new feature had asked themselves “what’s the worst that can happen?”—if just one of them had realized that not everyone using Facebook felt like celebrating their year—they might have put in safeguards to prevent their algorithm from assuming that a Facebook user’s most visited (most “popular”) post of the year was also their happiest.
They might also have made the “year in review” feature an opt-in, with questions designed to protect those who had experienced recent tragedy. Facebook didn’t build in those protections, not because they don’t care, but because our approach to design is fundamentally flawed, in that we build our assumptions around idealized and average users and use cases, and neglect to ask ourselves and our teammates, “what if we’re wrong? How could our product hurt someone?”
It’s not just Facebook. We all ignore the user in crisis.
Eric shared many examples from leading sites and services of unintended and sometimes horrifying instances of designs that hurt someone—from ads that accidentally commented sadistically on tragic news stories (because keyword exclusion is underrated and underused in online advertising); to magic keywords Flickr and Google added to their customers’ photos without asking, resulting in a man’s portrait being labeled “gorilla” and a concentration camp photo being tagged a jungle gym.
The problem, Eric explained, is that our systems have not been designed with people in mind. They’ve been designed with consumers in mind. Consumers are manageable fictions. But human life is inherently messy. To create sites and applications that work for everyone, including people who may be having the worst day of their lives at the time they encounter our product or service, we must always think about how our product could be used to hurt someone, and plan for the worst-case scenario whenever we design.
When we label a usage an “edge case,” we marginalize that user and choose not to care. Think “stress case,” instead, and design for that human.
We can do better.
Eric’s presentation included many techniques for bringing these new principles into our design workflows, and his book with Sara Wachter-Boettcher, Design for Real Life, goes into even greater detail on the matter. (It’s one of those rare and important books that defines how we should be looking at our design jobs today, and I would say that even if I weren’t the publisher.)
Tomorrow I’ll be back with another top takeaway from another AEA San Francisco 2016 speaker. The next AEA event, An Event Apart St. Louis, takes place January 30-February 1, 2017.
The post Identify “stress cases” and design with compassion: Eric Meyer appeared first on Zeldman on Web & Interaction Design.
12 LESSONS from An Event Apart San Francisco – № 1: Gerry McGovern was the 12th speaker at An Event Apart San Francisco, which ended yesterday. His session, Top Task Management: Making it Easier to Prioritize, tackled the firehose of content and interactions web and interaction designers and developers are called upon to support.
Gerry shared example after example of cases where most of this stuff didn’t matter at all to the person using the site or service, and drew the commonsense—but too rare in the corporate world—conclusion that if we spend our time making stuff that matters to our organization instead of stuff that matters to our customer, we will lose our customer. (“Nobody reads your annual report.”)
One of my favorite takeaways from Gerry’s session was about performance, but not in the way you probably think. Gerry pointed out that, in organizations, we are always measuring our own performance: how quickly did we turn that project around? Did we launch on time? Instead of dressing up our navel gazing with analytics that are about our tasks, we should measure our customers’ speed. How quickly do our sites and products help our customers achieve their goals? How can we identify and remove additional obstacles to completion, so our customers achieve their goals faster and faster?
We need to manage speed on the page, not just the speed of the page load. Manage the customer’s time on task. We won’t become customer-centric until we change our metrics—focusing on customers’ time to complete tasks, not on internal speed, and not just on the mechanical speed of page load—although page load speed (and perceived page load speed) are also terribly important, of course, and are part of improving the customer’s time to complete their task.
“If you solve the customer’s problem, they’ll solve your problem.” When you understand your customer’s top task, and focus relentlessly on helping them achieve it, you build a relationship that works for organization and customer alike.
Tomorrow I’ll be back with another top takeaway from another AEA San Francisco 2016 speaker. The next AEA event, An Event Apart St. Louis, takes place January 30-February 1, 2017.
The post Measure Customer Time, Not Organization Time: Gerry McGovern appeared first on Zeldman on Web & Interaction Design.
MESMERIZED as we have been by the spectacle of the flaming garbage scow of U.S. election news, it would have been easy to miss this other narrative. But in the past few days, just as Google, AT&T, and Time-Warner were poised to turn the phrase “online privacy” into a George Carlin punchline, in marched an unlikely hero to stop them: the American Federal Government. Who have just…
approved broad new privacy rules on Thursday that prevent companies like AT&T and Comcast from collecting and giving out digital information about individuals — such as the websites they visited and the apps they used — in a move that creates landmark protections for internet users.
— Broadband Providers Will Need Permission to Collect Private Data, by Cecilia Kang, The New York Times, Oct. 27, 2016
Given the increasingly deep bonds between corporate overlords and elected officials, this strong assertion of citizens’ right to privacy comes as something of a surprise. It’s especially startling given the way things had been going.
On Friday, Oct. 21, shortly before a massive DDOS attack took out most U.S. websites (but that’s another story), ProPublica reported that Google had quietly demolished its longstanding wall between anonymous online ad tracking and users’ names. I quote ProPublica’s reporting at length because the details matter:
When Google bought the advertising network DoubleClick in 2007, Google founder Sergey Brin said that privacy would be the company’s “number one priority when we contemplate new kinds of advertising products.”
And, for nearly a decade, Google did in fact keep DoubleClick’s massive database of web-browsing records separate by default from the names and other personally identifiable information Google has collected from Gmail and its other login accounts.
The change is enabled by default for new Google accounts. Existing users were prompted to opt-in to the change this summer.
The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on your name and other information Google knows about you. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct.
The move is a sea change for Google and a further blow to the online ad industry’s longstanding contention that web tracking is mostly anonymous.
—Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking, by Julia Angwin, ProPublica, Oct. 21, 2016
Et tu, Google
Google has long portrayed itself as one of the good guys, and in many ways it continues to be that. I can’t think of any other insanely powerful mega-corporation that works so hard to advocate web accessibility and performance—although one of its recipes for improved web performance is hardly a white hat move: making up a whole new proprietary markup language, then using its search engine dominance to favor sites that use that language (and, of necessity, host their content on Google servers) over sites that use standard HTML and host their own content. But that, too, is another story.
On privacy, certainly, Google had shown ethics and restraint. Which is why their apparent decision to say, “f–– it, everyone else is doing it, let’s stop anonymizing the data we share” came as such an unpleasant shock. And that sense of shock does not even take into account how many hundreds of millions of humans were slated to lose their privacy thanks to Google’s decision. Or just how momentous this change of heart is, given Google’s control and knowledge of our searches, our browsing history, and the contents and correspondents of our email.
Scant days after ProPublica broke the Google story, as a highlight of the proposed merger of AT&T and Time-Warner, came the delightful scenario of TV commercials customized just for you, based on combined knowledge of your web-browsing and TV-viewing habits. And while some humans might see it as creepy or even dangerous that the TV they’re watching with their family knows what they were up to on the internet last night, from an advertiser’s point of view the idea made $en$e:
Advertisers want … to combine the data intensity of internet advertising with the clear value and ability to change people’s perceptions that you get with a television ad. If you believe in a future where the very, very fine targeting of households or individuals with specific messaging makes economic sense to do at scale, what this merger does is enable that by making more audience available to target in that way.
—Individualized Ads on TV Could Be One Result of AT&T-Time Warner Merger by Sapna Maheshwari, The New York Times, Oct. 26
An unlikely privacy advocate
Into this impending privacy hellscape marched the U.S. Government:
Federal officials approved broad new privacy rules on Thursday that prevent companies like AT&T and Comcast from collecting and giving out digital information about individuals — such as the websites they visited and the apps they used — in a move that creates landmark protections for internet users. …
The new rules require broadband providers to obtain permission from subscribers to gather and give out data on their web browsing, app use, location and financial information. Currently, broadband providers can track users unless those individuals tell them to stop.
The passage of the rules deals a blow to telecommunications and cable companies like AT&T and Comcast, which rely on such user data to serve sophisticated targeted advertising. The fallout may affect AT&T’s $85.4 billion bid for Time Warner, which was announced last week, because one of the stated ambitions of the blockbuster deal was to combine resources to move more forcefully into targeted advertising.
—Broadband Providers Will Need Permission to Collect Private Data, by Cecilia Kang, The New York Times, Oct. 27
What happens next
The consequences of these new rules—exactly how advertising will change and networks will comply, the effect on these businesses and those that depend on them (e.g., newspapers), how Google in particular will be affected, who will cheat, who will counter-sue the government, and so on—remain to be seen. But, for the moment, we’re about to have a bit more online privacy and anonymity, not less. At least, more online privacy from advertisers. The government, one assumes, will continue to monitor every little thing we do online.
The post Private Parts: unlikely advocate fights for online privacy, anonymity appeared first on Zeldman on Web & Interaction Design.