Internet + SEO

Google – spellcheck guys

Hey Google,

Don’t know if you’re listening, assuming you are – you might want to spellcheck your Adwords homepage.

Small but very noticeable spelling error on it, see below: ‘Language’, not ‘langauge’.

Got there by clicking on the CPC after Googling ‘Google adwords’ FYI. Link.

Cheers Google Team.

Wrapped up in our own bubbles – part 1

There are several big-picture issues floating around on the internet at the moment, all of them revolving around filters, personalisation, social conversations, social media, and anonymity and privacy versus conversations tied specifically to your real life and the real you. I’m going to try to tackle these issues one by one and hopefully finish up with a few summarised thoughts, potential implications and lots of questions worth mulling over in the coming weeks, months and years. The issues are too big for just one post, however, so consider this to be part 1.

The first issue to tackle is the discussion around filters and personalisation. Eli Pariser argues that more and more individuals and companies are wrapping themselves in ‘filter bubbles’ of information, and that this is ultimately a bad thing. A filter bubble is “a concept developed by Internet activist Eli Pariser to describe a phenomenon in which search queries on sites such as Google or Facebook or Yahoo selectively guess what information a user would like to see based on the user’s past search history and, as a result, searches tend to play back information which agrees with the user’s past viewpoint. Accordingly, users get less exposure to conflicting viewpoints. And according to Pariser, the filter bubble is ‘that personal ecosystem of information that’s been catered by these algorithms’ which, based on past choices, reflect a person’s existing viewpoint.” Source.

I strongly encourage you to watch the video below as Eli draws out, but stops short of describing, some of the impacts of a filter bubble, and what that means for individuals, groups, collectives, societies and nations.

What Eli touches on, but doesn’t go into in great detail, is what this means in the medium to long term, and the implications for companies, political groups and social groups trying to communicate with a broader audience. In an era of ‘over-personalisation’, where your future search and online results are influenced by past decisions, it becomes easier to be convinced that you are right, because search engines – both algorithmic and social – deliver what you want to hear, read and see, influenced significantly by what you’ve previously asked for. How does an individual grow sufficiently to take in broad opinion if the search results they receive cater to what they know, not what they don’t know? Over time, the delivery of this personalised information leads to more like-minded searches, which in turn deliver more like-minded results. Breaking out of that cycle could be very difficult – how, for example, do you tell an algorithmic search engine that you want to be challenged, if you don’t know what the alternatives are?
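The feedback loop described above can be sketched as a toy model. Everything here is an illustrative assumption – the topics, the ranking rule and the always-click-the-top-result user – not how any real search engine works, but it shows how quickly a history-weighted ranking can lock a user into one topic:

```python
def recommend(history, topics):
    """Toy personalised ranking: order topics by how often the user
    has engaged with them before. Ties keep the original order."""
    return sorted(topics, key=lambda t: -history.get(t, 0))

topics = ["politics", "sport", "science", "arts"]
history = {}

# A user who always clicks the top result: after a single click the
# feedback loop locks the ranking onto one topic forever.
for _ in range(50):
    top = recommend(history, topics)[0]
    history[top] = history.get(top, 0) + 1

print(history)  # {'politics': 50} -- every other topic starves
```

The point of the sketch is the shape of the loop, not the numbers: the ranking feeds the clicks, the clicks feed the ranking, and nothing in the model ever surfaces the alternatives.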

The challenge for business should be plainly obvious: how do you talk to the unconverted when they aren’t receiving the information they might need, because years of search history suggest they aren’t interested and the algorithm isn’t programmed to recognise that they might need it? How do you reach individuals who should be looking at you and considering your services, when a search engine or personalisation algorithm has decided that your message isn’t relevant, even if it is? In an online environment, a future challenge (to start working on solving now) is how to get your message to those who aren’t already singing from the same hymn sheet.

This may not be a massive issue right now, because the internet is still relatively young and personalisation more so. But consider the generation growing up now – the next generation of students, consumers and leaders – how will their lives be affected, and how will businesses talk to individuals who have never known any different?

Perhaps Donald Rumsfeld said it best when he posited this:
“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

On to part 2.

Rebecca Black’s “Friday” is just about to pass 150 million views

Love it or hate it, Rebecca Black’s ‘Friday’ music video is just a few thousand views short of cracking 150 million views. (UPDATE: Within six hours of writing this post the video received the 36,000 views required, at the time of writing, to pass 150 million views)

For a video that cost $2,000 to make, and has some of the blandest, least provocative lyrics known to human existence, that’s just incredible.

What’s more incredible about the popularity of this song is what it doesn’t talk about. “Friday” doesn’t reference sex, violence, drugs, making money, stealing, crime, politics, nationalism, or coming from a broken home and doing it tough. It is possibly the most innocuous song ever written to achieve this level of popularity.

Just amazing what gets shared to such extreme volumes sometimes, and it’s worth standing back for a minute and thinking about that.

Also worth mentioning that Rebecca Black has said she is donating all advertising revenues to her school and a Japanese earthquake relief fund – pretty humbling, really.

Addendum: objectively speaking, the lyrics aren’t that bad. They are arguably no more or less inspired than Daft Punk’s ‘Around the World’, for example – a song whose title happens to be the only three words sung in the entire video.

The Guardian deserves recognition for this

The Guardian website team deserve a special round of applause for doing something truly inspired, and something I’m not sure any other company anywhere did during the royal wedding.

The Guardian cleverly tapped into the crowd of Britons that:
– are republicans
– really don’t care about the Royal Wedding
– have read all the vox-pops they can of regular people being ‘excited’, ‘having a great time’, and ‘enjoying the atmosphere’
– already know that no one knows where the Duke and Duchess of Cambridge will be honeymooning
– are tired of the endless speculation about who designed the wedding dress

So what did the Guardian web team do that was particularly creative? They effectively have two homepages running at all times, and it is as simple as clicking a button to get to either version.

With Wedding Coverage

Without Wedding Coverage

Clever stuff from the Guardian team.
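For illustration only – the Guardian hasn’t published how their toggle actually works – the core idea can be sketched as a simple tag filter over the homepage story list. The story data and tag names below are made up:

```python
# Hypothetical homepage story list; each story carries a set of tags.
STORIES = [
    {"title": "Markets open higher", "tags": ["business"]},
    {"title": "Kate's dress revealed", "tags": ["royal-wedding"]},
    {"title": "Election analysis", "tags": ["politics"]},
    {"title": "Crowds line the Mall", "tags": ["royal-wedding"]},
]

def homepage(stories, hide_wedding=False):
    """Return the homepage story list, optionally filtering out
    anything tagged with the wedding."""
    if not hide_wedding:
        return stories
    return [s for s in stories if "royal-wedding" not in s["tags"]]

print([s["title"] for s in homepage(STORIES, hide_wedding=True)])
# ['Markets open higher', 'Election analysis']
```

One homepage, one flag, two experiences – which is presumably why the button-press feels so instant to the reader.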

Yahoo labs discovers what a ‘like’ is worth

Likes and retweets have become big business, and front of mind for many of the world’s top publishers, marketers, community managers and digital product managers. But what exactly is a ‘like’ worth? How many page views does it bring? When does a like translate into a revenue-earning click? What does it mean to digital product managers, marketing managers, advertisers and publishers in real terms?

Yury Lifshits from Yahoo Labs has produced a fantastic Vimeo video (below) displaying his extraordinary, and perhaps even unprecedented, work and research into the life of a ‘like’.

Watch the video now, or return after reading some key bullet-pointed findings from the video, below.

The Like Log Study from Yury Lifshits on Vimeo.

  • Stories about Facebook, Apple, Verizon, Groupon, future and infographics are universally popular across technology blogs. Articles about Microsoft, Amazon, Samsung, cloud computing, TV and search see much less engagement.
  • There are around 10 likes per 1000 pageviews (across several websites with public PV numbers). Decay of engagement is extremely sharp, with less than 20% of likes happening after the first 24 hours.
  • The lifespan of a story getting likes or retweets is very short. Yury discovered that even on Engadget, a website where tech stories tend to stay popular for longer, over 80% of likes occurred within the first two days. See the image on the right for the average life cycle of a news article via social media likes and retweets.
  • Among the top stories only four are fact-based political news and three are about celebrities. The most common type of hit story is opinion/analysis. Other common themes include: lifestyle, photo galleries, interactives, humor and odd news.
  • What’s popular depends on the publication, the audience, and naturally enough, what those readers deem to be shareable. Yury discovered that stories about Barack Obama or Wikileaks were particularly popular, but stories about Google and China were less so.
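To put a couple of those numbers together, here’s a quick sketch. The likes-per-pageview ratio comes straight from the study; the exponential-decay curve is my own assumption, not the study’s model, fitted so that 80% of likes arrive in the first 24 hours:

```python
import math

# Rough number from the study: ~10 likes per 1,000 pageviews.
likes_per_pageview = 10 / 1000
print(f"{likes_per_pageview:.1%} of pageviews convert to a like")  # 1.0%

# Assumed exponential-decay model: choose the rate so that 80% of
# likes arrive within the first 24 hours, matching "less than 20%
# after the first day".
lam = math.log(5) / 24                        # per-hour decay rate
share_by = lambda hours: 1 - math.exp(-lam * hours)

print(f"{share_by(24):.0%} of likes within 24h")   # 80%
print(f"{share_by(48):.0%} of likes within 48h")   # 96%
```

Under that assumption the model slightly overshoots Engadget’s “80% within two days”, which is a reminder that the decay rate varies by site; the shape, not the exact constant, is the takeaway.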

Yury also revealed an absolutely amazing statistic, (emphasis mine):

“At this stage, the average reader of the New York Times (website) only shares one story per year.” Yep, you read that right: the average reader only shares one story PER YEAR. Only one, every 365 days. That’s incredible.

That means the NYTimes has a lot of readers and, more importantly, that if they could improve that statistic by as little as 10 or 20 per cent they would see an absolutely huge increase in traffic on their website – a boon for their revenue.
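The arithmetic behind that claim is simple enough to sketch. The reader count below is hypothetical (no audience figure is given here); the point is that total shares scale linearly with any lift in the per-reader sharing rate:

```python
def annual_shares(readers, shares_per_reader=1.0):
    """Total shared stories per year at a given per-reader rate."""
    return readers * shares_per_reader

readers = 30_000_000          # hypothetical audience size
base = annual_shares(readers)               # one share per reader per year
plus_20 = annual_shares(readers, 1.2)       # a 20% lift in the rate

print(f"{plus_20 - base:,.0f} extra shared stories per year")  # 6,000,000
```

Because the baseline rate is so low, even a modest behavioural nudge compounds into millions of additional referral visits.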

Yury goes on to draw five conclusions, and the full five are available at the bottom of this page.

I’ve extracted the two that product managers, advertisers and editorial teams need to consider the most:

  • Improve promotion of your best content. According to our measurements, web stories are practically lost 24 hours after publications. Only 20% likes are coming after the first day. This engagement pattern discourages production of “big stories”. To get maximum return on your hits, change your frontpage policy. Best stories should be highly visible. Consider hits-only RSS and twitter feeds, month-in-review / year-in-review programs. TechCrunch Classics is another example of hit promotion. And internal efforts are not enough. Breakout success comes when other media (top TV networks, newspapers and magazines) are picking your story and link back to it.
  • Improve your median story. Sort all your stories by engagement and pick a story right in the middle of the list. This is called a “median”. A median story has less than 50 likes for majority of websites in our study. In other words, every second story takes more effort from a writer than it brings value to the readers. Recently leaked “The AOL Way” reports their median story to have only 1500 pageviews, and they aim to grow it four times. Publishers should ask themselves: Why do we write so many weak stories?
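The median-story check described above is easy to reproduce. Here is a sketch with made-up like counts standing in for one site’s recent stories:

```python
from statistics import median

# Hypothetical like counts for a site's recent stories.
likes = [3, 410, 12, 55, 7, 120, 31, 2, 48, 9, 16]

likes.sort()
mid = median(likes)          # middle value of the sorted list
print(f"median story: {mid} likes")

# The study's benchmark: a median under 50 likes suggests half of
# all stories attract very little engagement.
print("below the 50-like benchmark:", mid < 50)
```

Note how a single runaway hit (410 likes) does nothing to move the median – which is exactly why Yury uses it rather than the mean to judge the typical story.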


There is only one of Yury’s suggestions that I would offer a slightly different perspective on: his suggestion, and question, about producing content that doesn’t work as well, or isn’t as popular. If a publisher only produces content they know works well, and that their audience specifically says it likes right now, then producing only that content could box the publisher in down the track. Further, that content will only serve to reinforce current consumer behaviour and will only attract like-minded individuals, potentially limiting growth and risking the publisher becoming typecast as a specialist provider of one specific type and genre of content.

Publications need to have a broader range of content for a few reasons:

  • You need broader supporting information as a meta-SEO-framework for ongoing online growth
  • A bigger net catches more fish – by which I mean that even if lesser stories don’t always do so well, the publication doesn’t become known only for a limited range of content, and may in fact draw a new user in via content that isn’t as ‘liked’ by the regular audience. Over time, this traffic matters significantly in volume terms for archive value, size and referencing potential
  • Dropping less ‘important’ content may be a viable strategy for some businesses, but in the long run – particularly in technology blogging and publishing – trends quickly shift. If Facebook reporting suddenly became less popular and tablet reporting rose in popularity, does that mean you drop all of it? Not at all. You may pull back on the throttle for a while, but that content is still valuable, because worldwide there will still be a thirst for that information from niche pockets of readers.
  • Just because it’s not being shared socially, doesn’t mean it isn’t getting read and isn’t useful.

Publishers, industry analysts and those interested in the inner workings of social media should absolutely take note of these findings. A great many questions unfold as a result of reading this research.

Is content still king?

Yes, but it’s no longer a simple question of just having ‘content’.

The internet, and Google in particular, is moving on from simply ‘content is king’ to ‘great content is king’. The recent announcement on Matt Cutts’ blog should be heralded by users, and by genuinely good websites and media companies, as a great advancement in internet search.

So what’s going on? In short, Google is changing the way it decides which content should be delivered as a search result when individuals search for everything from ‘baby clothes’ to ‘hotels in Vietnam with a view’ to ‘vintage motorcycles’.

The recent change “noticeably impacts 11.8%” of Google queries, according to Matt Cutts’ blog, and “is designed to reduce rankings for low-quality sites—sites which are low-value add for users, copy content from other websites or sites that are just not very useful. At the same time, it will provide better rankings for high-quality sites—sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.”

Back in 2009, Randfish wrote a blog piece titled “Terrible SEO Advice: Focus on Users, Not Engines” – and at the time he might have been right, in that the search engine algorithms were easier to game: just the right number of backlinks, and pages of content peppered with just the right density of specific keywords and metadata (exact-match and close-match, short-tail and long-tail), could get search engines to deliver results that seemed more relevant to the user. This, coupled with cleverly optimised metadata that hit all the right algorithm buttons, was often enough to get specific content pages to the top of Google and therefore in front of the eyes of millions.

Randfish has been at pains to ensure he isn’t misunderstood: he’s not suggesting that SEO should ignore the user, but he does highlight the sheer volume of work required to optimise for both users and search engines:

“I want to issue an apology for that now and set the record straight – SEO is a task that requires paying close attention to the needs of both users and engines. You can’t be an effective SEO without it.

Just think of all the specific tasks we perform that we’d never do if it weren’t for search engines:

  • Title tags: We might still make them, but agonize over keyword usage and positioning, uniqueness and flow? I doubt it.
  • Meta tags: Nope. No reason to even bother.
  • XML Sitemaps: I’m pretty sure no human has ever visited this file in an attempt to sort out the pages on your site.
  • Webmaster Tools Registration: Without engines, there wouldn’t be any.
  • Keyword Research: I think this practice would be more like advertising copy – think Mad Men.
  • Keyword Targeting: Why worry about keyword placement for anything other than conversion rate optimization?
  • URL Canonicalization: No need – visitors are getting the content either way.
  • Accessible Link Structures: So long as you’re not worried about the >2% of visitors who can’t see Flash, go ahead and build rich applications to your heart’s content.
  • Robots.txt & Meta Robots: No engines, no reason to direct engines.
  • Link Building: Unless it’s specifically to draw in relevant traffic, why bother?
  • Creating Vertical Search Feeds: That’s going to be time wasted.
  • Information Architecture: While there’s good reasons to do some of this for users, a significant portion of the accessibility and link hierarchy arguments are made moot.
  • Redirection: Without engines, we can use whatever method is convenient – javascript, meta refresh, 302 – it makes little difference to the user.
  • Rel=”Nofollow”: Internally or externally, it becomes a pointless attribute.”

So where does this leave us, the digital marketers? It leaves us knowing that the future – at least until Google changes again – is what most of us always knew it should be, and what we know we should be delivering: better, genuinely useful and engaging content, designed with an editorial-style guide and built to suit a target audience. There’s always room for marketing flair and panache, but delivering content that is genuinely better, more thoughtful, deeper, more empowering and more usable will achieve your true goals: consumer interaction, bigger databases and higher chances of conversion.

Image source.

What’s free worth?

Everything’s free on the internet.

Free is everywhere you turn, it’s all so easy, sign up to this, sign up to that, register here, login, sign in, don’t worry it just takes a second to sign up, we just need your email, no – + or $ symbols, choose a password with a minimum of 6 characters, 4 numbers and an explanation of how gravity works, tell us why it gets cooler just before the sun comes up and the square root of 347.6…. quick you’ve only got 5 seconds to complete this form, 5, 4, 3, nah just kidding take as long as you want we need your details to expand the database, which in turn helps us value the company to prospective buyers. Oh yeah that’s right, leave your wallet in your pants champ, this one’s on us.

But it’s not free. ‘Free’ is your time. ‘Free’ is arguably more expensive. Who has nothing to do?

‘Free’ is asking you to pull away from whatever else might have been otherwise occupying that timeslot and dedicating it to itself. ‘Free’ is opportunity cost.

And because time is the one thing you can’t get more of, free products that exist now and are developed in the future will have to be either unbelievably unique and valuable – which is initially a short-term proposition, because others will see what you’ve done and attempt to replicate it, so constant and relevant innovation will be necessary (think MySpace vs Bebo vs Friendster vs Facebook) – or very quick and accurate but not worth paying for (think and Google).

On the other hand, the product can be good, valuable and subscription-based, and you may just end up with an even better product, more solid financials and a more qualified database of users who care enough to buy and use the product.

Because when something is free and grows quickly, it’s also subject to faster competitive loss in the event of getting out-maneuvered, out-innovated or out-played.

Is ‘free’ the answer? Probably not: excellent products will excel, and the rest will fall by the wayside.

Free is just a price point, the product is still king.