The Apple v. Does (O'Grady v. Superior Court) case, in which Apple tried to subpoena online publishers' information for an investigation, has been well-analyzed (a big win for EFF). I'm going to skip the (somewhat misconstrued) Bloggers vs. Journalists! aspects, because they've been chewed to death, and instead write a synthesizing post about the Wikipedia elements, to highlight some other factors.
In "New Age judge blasts Apple", Andrew Orlowski states:
However Apple has struck gold in finding a techno utopian in a state of rapture. Judge Rushing cites Wikipedia as a source, a mistake which earns students an 'F' grade today. He talks about the need to disregard economics and sociology in favor of a "memetic marketplace" - whatever that is - and allows himself some flights of technological rapture.
[N.b. - I think "memetic marketplace" was the judge's way of being hip, where a more staid judge would have used the traditional phrase "marketplace of ideas"]
Actually, I suspect the problem was recognized, and Joe Gratz analyzed it in Apple v. Does Court Cites Wikipedia:
In 2003, I opined that citation to Wikipedia in the course of a legal argument was asking for trouble, since anyone - even opposing counsel - could pull the factual rug out from under one's argument.
The California Court of Appeal, though, dodges the problems I foresaw. It cites Wikipedia almost exclusively for the definitions of internet argot and geek pop culture references: ...
These articles are particularly likely to have reached an accurate and complete equilibrium, since the core Wikipedia constituency is deeply familiar with their subject matter, and that subject matter is not hotly contested. While one can imagine a flame war emerging over precisely what is or isn't a BBS or a blog, the opinion cites Wikipedia in the same situations I do - when the reader's general knowledge of the subject matter will assist understanding of the argument, but the underlying details aren't dispositive of the argument's merit.
In other words, citing a geek trivia collection to define popular geeky terms is probably not dangerous.
And, besides taking apart some of the Bloggers vs. Journalists! hype, in Courting Wikipedia, Citing Wikipedia, Jon Garfunkel reveals:
In an earlier footnote, Judge Rushing defended his use of Wikipedia: "As with many of the concepts in this opinion, the most authoritative and current sources of information may themselves be found on the web." Of course, "on the web" is as precise as saying "in printed materials." The difference is that information in printed materials generally can be traced. With the web, it's a bit trickier. One searches the Bear Flag League, and finds out that they're a group of conservative California bloggers, and then searches more to find out that the founder was Justene Adamec. As for who came up with "we blog," that is Peter Merholz, who explains such here. As for the quote in bold, it's a meaty passage out of Wikipedia. In this case, it's practically impossible to find out who had authored it, unless the author steps forward.
It was me. And I'm absolutely delighted.
Maybe this is what they mean by anyone can contribute :-).
[The "EU lawmakers consider taxing emails, SMS messages" story is echoing now. I wrote the following debunking for a mailing-list, in a futile attempt to use the wondrous power of The Internet and unpaid freelancing, I mean, "citizen journalism", to debunk bad reporting. We see how well that's working ...]
As far as I can tell, this story is being blown way, way, out of proportion. The EU is nowhere near taxing e-mail or text messages. One member put forth the idea in a discussion, but it's unclear if anything ever happened after that. I managed to trace back what might be the source:
"Participants were not short of imagination for new forms of funding: taxes on flights, company profits or even on short text messages sent by mobile phones. The supporter of this idea, EP own resources rapporteur Alain Lamassoure (EPP-ED, FR), also believed that the new system would have to be clearly linked with benefits drawn from the European Union. Thanks to the internal market "exchanges between countries have ballooned, so everyone would understand that the money to finance the EU should come from the benefits engendered by the EU," he explained."
Then there was an interview with a newspaper, EU Observer,
which is now locked in pay-archives, though there's some excerpts here:
Alain Lamassoure has a website here:
There's a forum where he's responding, but it's in French, and I don't feel comfortable attempting to translate his replies.
But there's a vast difference between some woolgathering, and any sort of formal proposal, much less anything being enacted.
But even when this appears to work, so what? Seth Finkelstein notes that in some situations, throwing darts at a dartboard produces excellent results. Citing the Wall Street Journal Dartboard Contest, he writes,
"People are fascinated by ways in which data-mining seems to represent some sort of over-mind. But sometimes there's no deep meaning at all. Dartboards are competitive with individual money managers - but nobody talks about the 'wisdom of darts'"
Seth Finkelstein points out an immediate consequence which is already taking place. Wisdom... gained such traction on the net, because of its cultural distrust of expertise. This stops where the net stops, however - it's hard to envisage even the most militant Wikipedia fan choosing to be operated upon by an amateur heart surgeon. But it's accelerated the process of deskilling, and the new flood of cheap (but wise!) amateur labor promises to depress wages even further.
There's been some discussion about changes to policies regarding restrictions concerning who can edit some Wikipedia articles, and what this means for the ideals of (lack of) collective intelligence.
I think it's important to distinguish between the "silk purse out of sow's ear" argument, and "free labor" argument. The hype around Wikipedia is basically, bluntly, that it's magic. Throw together a bunch of sausage fragments, cover with a mystic curtain, incant the spell "Modsiw Fo Sdworc", and poof - out will come a silky article.
When it's found out there's really a man behind the curtain (any administrative actions to halt the editing process when it goes awry), some Wikipedia boosters seem resentful about ruining the trick.
Without the magic, if all that remains is an example of how a heavily hyped project, with very elaborate ways of escaping accountability for errors, can produce material on the level of a term paper without paying the writers - well, one has to wonder exactly who finds that so exciting, and why.
It's not a revolution in knowledge, it's an innovation in deskilling. It's taking the graduate-student model - get devotees to work for no money, to enrich and aggrandize the project-head - and applying it to middlebrow work instead of academic work.
People are fascinated by ways in which data-mining seems to represent some sort of over-mind. But sometimes there's no deep meaning at all. There's a well-known experiment in picking stocks: dartboards are competitive with individual money managers - but nobody talks about the "wisdom of darts" (because there are no DartBoard 2.0 salesmen ...).
... think instead about how to get a few key people to read what you are blogging - that's what will really bring the traffic. -- Robert Sc*ble
There have been awe-inspiring traffic results from the recent 10 Things You Might Not Know About Google posting (which was in fact written by Philipp Lenssen, as part of a blog swap). For edification, here are some numbers:
Total page views: 97971
Total unique IP addresses: 86827
Number one source: digg.com: 38822 unique IP address visitors (~ 45%)
Bloglines subscribers (main feed): up from 222 to 236
Technorati rank: From about 120 sites linking to 180 sites linking, raising the blog rank from around 15,000 to around 9,000 (!). Maybe I should promote myself to C-lister nowadays, rather than Z-lister.
And lots and lots of blog-spam.
It feels like there's yet another lesson in here (besides the now-tedious fact that I'm wasting my time on unedited-voice essays and censorware/DMCA net activism - contrary to blog evangelism, the little guy does not get heard). Launch a Google-oriented site? I keep going back and forth on the "business case". Maybe.
Top site referers by unique IP address after the jump below:
The Children's Internet Protection Act (CIPA) requires [censorware] in most schools and libraries for adults and minors alike. A new report from the Free Expression Policy Project at the Brennan Center for Justice explains the effects of CIPA and then analyzes nearly 100 tests and studies that demonstrate how filters operate as censorship tools. "Internet Filters: A Public Policy Report" concludes: Although some may say that the debate is over and that filters are now a fact of life, it is never too late to rethink bad policy choices. The report is available at http://www.fepproject.org/policyreports/filters2.pdf
This is a great resource, collecting many of the references for censorware research.
Skimming through it was a bittersweet trip down memory lane for me. I was the secret decryption source for many of the early studies mentioned, though that's not mentioned anywhere (and I'm not criticizing them at all, no particular reason they should note it, just describing why it's so bittersweet). Some later reports done under my own name are there, which is good. So all in all, I suppose I did make a difference.
This article is written by Philipp Lenssen as part of the Blog Swap with Seth Finkelstein – Seth's article on 10 Things You Might Not Know About Censorware can be found at Philipp's blog.
Not too long ago, you couldn't enter more than 10 words into the Google search box. Or to be more precise, you *could*, but subsequent words were ignored. I bet the Google founders were thinking "10 words ought to be enough for everyone," and mostly they were right – but for some advanced uses, especially with the Google Search API, a little more is helpful. Then, a while ago, Google increased the limit to 32 words. This is probably OK for a few more years!
Another change is that Google no longer ignores stop words. Stop words in search engines are words like "the" or "a" which are too tiny or common to be useful additions to most searches. However, Google is now accepting them as semi-normal words (one remaining difference being that they're not highlighted, or linked to the dictionary). This means in Google.com, you get different results when searching for [the tale of a cowboy] vs [* tale * * cowboy] vs [tale cowboy]. (I'll be using square brackets around search queries – they're not to be included in the search.)
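To make the old behavior concrete, here's a minimal sketch of classic stop-word filtering as a query parser might have done it. The tiny word list below is purely a placeholder assumption – Google's actual stop-word list was never public:

```python
# Illustrative sketch of old-style stop-word filtering in a query parser.
# The STOP_WORDS list here is an assumption for illustration only.
STOP_WORDS = {"the", "a", "an", "of", "to", "in"}

def strip_stop_words(query):
    """Return the query terms with stop words removed (old behavior)."""
    return [term for term in query.lower().split() if term not in STOP_WORDS]

# Under this scheme, [the tale of a cowboy] collapses to the same terms
# as [tale cowboy], so both searches would have returned identical results.
print(strip_stop_words("the tale of a cowboy"))  # ['tale', 'cowboy']
```

The change described above means Google stopped applying this kind of collapsing, which is why the three bracketed queries now give different results.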
Another operator changed its functionality over the years: a couple of years ago, you could only query Google for [site:something.com], but not [site:something.com/something/]. Today, you can add folders to the site operator.
These days, everyone puts a Beta tag on their 2.0-ish web app. But did you know that back in 1998, when Google launched their search, it was also in Beta? Take a look at a copy stored in the Wayback Machine to see it. Be aware the page might look quite ugly by today's standards... heck, it was probably ugly even back in 1998 (then again, so was my homepage in 1998!).
While no one outside Google knows for sure, it is often speculated that Google's PageRank value – the "authority rank" (or quantity of backlinks which themselves receive lots of backlinks) – is a much more precise number than the plain 1, 2, 3... 10 values. A float, not an integer, if you will.
So, for example, if you're looking at a site which shows a PageRank 8 in the Google Toolbar, its internal PageRank may be something like 8.355 (or however precise Google's number is). But we don't know for sure – maybe Google's algorithms prefer speed over quality when it comes to the recursive PR calculations of billions of pages. This calculation might not be a breeze even for Google's 10,000 - 200,000 computers (that's another number we can't be too sure of outside of Google).
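To illustrate why the internal value would naturally be a float, here's a minimal power-iteration sketch of PageRank on a toy four-page link graph. The damping factor, iteration count, and toy graph are all assumptions for illustration – nobody outside Google knows the real parameters or how the 1-10 toolbar scale is derived:

```python
# Minimal PageRank power-iteration sketch on a toy link graph, to show
# that the underlying ranks are real numbers, not the integer 1-10
# toolbar scale. Damping factor and graph are illustrative assumptions.
DAMPING = 0.85

def pagerank(links, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets a base share, plus contributions from in-links.
        new_rank = {p: (1 - DAMPING) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = DAMPING * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for page, value in sorted(pagerank(toy_web).items()):
    print(page, round(value, 4))  # fractional values, e.g. 0.3 vs 0.03
```

A toolbar-style display would then round or bucket these floats into the coarse 1-10 scale, throwing away the precision the article speculates about.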
I guess when you're an uber-geek, like Google founders Larry Page and Sergey Brin, you are also very competitive (to the point of risking arrogance towards slower thinkers, maybe). John Battelle, in his book The Search (pages 67-68), tells of how the two met at Stanford University in the summer of '95:
Like most schools, Stanford invites potential recruits to the campus for a tour. But it wasn't on the pastoral campus that Page met Brin – it was on the streets of San Francisco. Brin, a second-year student known to be gregarious, had signed up to be a student guide of sorts. His role that day was to show a group of prospective first-years around the City by the Bay.
Page ended up in Brin's group, but it wasn't exactly love at first sight. "Sergey is pretty social; he likes meeting people," Page recalls, contrasting that quality with his own reticence. "I thought he was pretty obnoxious. He had really strong opinions about things, and I guess I did, too."
"We both found each other obnoxious," Brin counters when I tell him of Page's response. "But we say it a little bit jokingly. Obviously we spent a lot of time talking to each other, so there was something there. We had a kind of bantering thing going."
You might have come across the official Google Blog. But did you know Google actually has 16 different – and all official – blogs (give or take one)? Here's the full list (I'm also collecting these all on one page):
You heard about how Google self-censors in China (e.g. human rights sites top-ranked by Google in other countries are missing in Google.cn). But did you know that Google showed censored search results in other countries for years, sometimes even without showing a disclaimer that something was missing? In Germany and France, that was the case.
You can see this for yourself if you first search Google.com for [site:ety.com]. This will result in 9,940 results. Now if you do the same search on Google.fr – Google France – you get zero results. However, there's a disclaimer at the bottom:
"In response to a legal request submitted to Google, we have removed 260 result(s) from this page. If you wish, you may read more about the request at ChillingEffects.org."
Note Google's disclaimer is showing the wrong number of missing pages – it's in the thousands, not 260. Following the link to Chilling Effects, we see this text:
Google received complaints prior to March 2005 about URLs that are alleged to be illegal under U.S. or local law. In response to these complaints, one or more URLs that would have appeared for this search were not displayed.
In other words, Google is not censoring this out of their own belief, but by following government requests. Now what's ety.com anyway, aside from being one of the many censored domains? A quick glance will show it's some kind of stupid Nazi propaganda site, illegal by some country's standards. But you know what Voltaire said... "I may disagree with what you say, but I will defend to the death your right to say it."
Since around 2001, Google proudly showed off on their front page the number of pages they search through... a number that went from a billion and a half to over 8 billion (according to Google). Today, Google doesn't show this number anymore. Maybe Googlers – that's what Google employees are called – realized that results quality beats results quantity. Or maybe they just realized that by sheer numbers, competitors were winning. In August 2005, Yahoo announced in their blog:
As it turns out we have grown our index and just reached a significant milestone at Yahoo! Search – our index now provides access to over 20 billion items (...) [including] over 19.2 billion web documents
Today, when you want to find out about the Google index size, there's a workaround though: search Google for ["* *"] – that's a good estimate. Right now, it's displaying 25,270,000,000 pages. In a direct comparison, when we search for "the" on both Google and Yahoo, Google shows a couple of billion pages more. Then again, these numbers are hard to verify – Google only lets us see the first 1000 results for each query. And in the end, who wants to see more than that anyway? Most people don't even go beyond the first 10 results, and rather adjust their search query instead!
If you're a developer utilizing the Google web search API, and you need to go well beyond the 1,000 requests per day Google offers by default, here's a tip: you can email the Google API support and request more hits for your API key. Depending on your projects and traffic needs, which you will have to outline, Google just might grant you the request!
While Google doesn't have its own comic book search engine, you can still achieve good results by going to Google Images, setting the file size to "Large images", and then searching for [comics]. Using this setting, you can also search for an artist's name, like ["john byrne"], ["john romita jr"], ["frank miller"] or ["daniel clowes"]. You might even have some fun adding your own speech bubbles to the comic book pages you find (use a free font like WebLetterer for best results)...
OK, so Writely – which Google recently acquired – is not really a chat, but an online word processor. However, by inviting others to your Writely document, you can group-edit any document... and see the changes by others merged into the document as you type! This feature allows you to chat with a group, and you can have fun with positioning text on different places on the screen, wiki-editing what others wrote, or adding colors and images.
Demoblog - Google-bombing for Alaa, press release:
On Sunday May 7, Alaa Ahmed Seif El Islam, a prominent Egyptian blogger and political activist, was detained in Cairo by the Egyptian authorities while protesting the earlier detention of political activists rallying for a free judiciary.
On Tuesday, a group of bloggers connected to the site Global Voices decided to launch a different kind of campaign, one that would use the mechanics of the internet itself to bring world-wide attention to Alaa's case. They launched a campaign called "Google bombing for Alaa," an effort to manipulate the ranking of the world's search engines so that a blog dedicated to freeing Alaa (http://freealaa.blogspot.com/) would be the first page displayed when a person searches for information on the word "Egypt".
(via Jon Lebkowsky)
This is interesting, for a few un-obvious reasons. "Egypt" is a word which has many, many links. So I doubt it'll get much traction, certainly not for a long time. It then turns into a kind of meta-experiment, where media attention is obtained for the attempt itself.
I ought to try to figure out what's causing this, and how much fun I can have with it, before it gets fixed. In case the problem isn't obvious, in reality the Electronic Frontier Foundation and IPcentral are NOT discussing a post from my puny Z-list blog.
And here I was, feeling unhappy given the few dozen other readers the post had garnered, especially given the effort it took. Now I'm told it leads the discussion. That must be true, the computer says so! :-)
[Update: Looks like someone involved might have dropped by. Maybe I shouldn't have written this post. Oh well, at least I got a chuckle out of it.]
Solveig Singleton has written a "pro-DMCA" report, in part replying to an earlier Tim Lee "anti-DMCA" paper. The pro-DMCA arguments are being extensively criticized e.g. by EFF and Ed Felten's (not) "Happy Endings". Against my better judgment, I looked at the report, and immediately spotted some deeply flawed discussion of Linux and the decryption of DVD's (DeCSS). For whatever good it'll do, since I know something about the topic, I'll toss this into the rebuttal of the DMCA defense. Solveig Singleton states:
Tim Lee's recent paper for the Cato Institute unfortunately contains a number of errors: ... Describing the DVD-CCA, which licenses CSS keys, as having neglected the development of Linux players, and attributing the development of DeCSS to this failure. First, CSS keys are licensed to anyone willing to comply with the license and pay the $15,000 application fee. Licensed Linux players include software such as Linspire, and LinDVD, as well as hardware such as MediaReady Digital Media Center product line from Video Without Boundaries, and have been available for a number of years. Furthermore, DeCSS was developed as a Windows product and the thesis that it was developed primarily to support Linux as opposed to simply break DRM is highly dubious.
1) The development of a free software Linux DVD player was indeed driven by lack of availability of licensed Linux DVD players at the time (let's not quibble over whether to call that "neglect" or not).
Below are the relevant refutations from Matthew Pavlovich's trial testimony:
A. After getting to the point where we had gotten to where we needed to begin the DVD project, I spun a sister project off from Utah GLX that became known as the Linux Video project or for short, LiViD.
Q. Why did you start LiViD?
A. Quite frankly, I wanted to play DVDs on my Linux box. I received documentation for a hardware decoder that worked with my video code at the time and I wanted to be able to utilize that decoder chip and the DVD drive and movies I bought under Linux.
2) While the DeCSS program is what led to the court case, the history shouldn't be read apart from the whole development project for a Linux DVD player, which was inarguably about playing DVDs on Linux.
Q. Was DeCSS part of or connected to the LiViD project?
MS. MILLER: Objection, your Honor, no foundation.
THE COURT: Overruled.
A. Yes, the DeCSS has actually a long history of being related to the LiViD project. The CSS project or CSS process has a few phases, the authentication between a decoder or the piece whether it be hardware or software that takes the DVD data and converts it here in audio and video presentation and the actual decryption where it decrypts the encrypted content.
The first part of that process was the authentication and that was written and released for and under the LiViD project. DeCSS utilized the CSS routines from the LiViD project as a piece of DeCSS. DeCSS, the source code was later translated, the core functions were used in the decrypting part of the DeCSS for the Linux video player.
3) And the Windows aspect means less than one might think.
A. The file system found on DVDs is the UDF support for Linux was in infancy at the time, so one would need to have access to read the data before being able to decrypt the data on the disk, so yes someone would have to use windows or an operating system that supported UDF to develop DeCSS.
That one paragraph took me a page, and more time than I should have spent on it, to dissect. One other note, going back to Solveig Singleton:
Commentary on the DMCA at this point needs to be less strident and much more constructive. If the process for deciding which applications should be exempted from the DMCA is not working well in some areas, how could it be improved? Exactly how could the exemption for security and encryption research be strengthened without transforming anyone with a little technical skill and an ideological bent against DRM into a "researcher?" Or is it rather the hope of critics that this would happen?
Solveig, am I someone with "a little technical skill and an ideological bent", or a researcher? (for the purposes of a lawsuit, these are obviously disjoint categories - it's trivial to joke "both", but one can't be a little bit sued). That's not a completely rhetorical question. If the apologism algorithm is to trivialize the DMCA issues against high-status people (Felten), and to sneer at the DMCA issues against low-status people (DeCSS), that's a poor start from which to call for less strident and more constructive commentary.
Yahoo Italy has been denying results for certain search keywords, as reported by Jacopo Gonzales and echoed by the Google blogs (Inside Google, Google Blogoscoped, SearchEnginewatch.com, SEW Forum)
To summarize what's known, including some of my research:
1) A few affected words have been found: "shit", "shithead", "preteen"
2) The pattern-matching is tight - searching [shit] will be denied, but [Shit], [sHit], [shIt] and [shiT] are all fine, as well as [shit shit]
3) It's very easy to see the problem at a low-level. Searching with a denied keyword generates a HTTP 302 redirect response to the Yahoo directory, whereas anything else gives a normal HTTP 200 OK response. That is
Gives a low-level HTTP response of:
(which is a redirection to the directory)
Someone might want to spin through wordlists to find other words (I'll pass). Though I've found [shits] and [shitting] are affected too, as well as, err, the Nabokov character (this post has enough strange keywords!)
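For anyone who does want to spin through a wordlist, a check like the one described above can be sketched as follows. This is a hedged sketch under assumptions: the it.search.yahoo.com host and `p` query parameter are my guesses at the URL format, and the 302-on-denial behavior described here was what was observed at the time, not anything guaranteed to persist:

```python
# Sketch of the denied-keyword check: issue the search request directly
# and inspect the raw HTTP status instead of following redirects.
# Host name and query parameter are assumptions for illustration.
import http.client
import urllib.parse

DENIED_STATUS = 302  # denied keywords redirected to the Yahoo directory

def build_search_path(word):
    """Build the search query path for a single keyword."""
    return "/search?" + urllib.parse.urlencode({"p": word})

def classify(status):
    """Map a raw HTTP status to the observed behavior."""
    return "denied" if status == DENIED_STATUS else "ok"

def check_word(word, host="it.search.yahoo.com"):
    """Request the search page without following redirects; classify it."""
    conn = http.client.HTTPConnection(host, timeout=10)
    try:
        conn.request("GET", build_search_path(word))
        return classify(conn.getresponse().status)
    finally:
        conn.close()

# Example usage (requires network, and the 2006-era behavior):
#   for w in ["shit", "Shit", "shit shit"]:
#       print(w, check_word(w))
```

Because the check looks at the status line rather than rendered pages, it's fast enough to run over a whole wordlist, which is exactly why the 302-vs-200 distinction is the low-level tell.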
All in all, while some people are wondering if this is a censorship issue, it looks at least partly like a bug to me. Some wordlist has gotten misplaced - "shit" is much too mild a word to be a censorship target here.
"Hoodwinking the censors" is an interesting article about anti-censorship software being developed at the OpenNet Initiative [Update: ...] (hat tip: Philipp Lenssen).
I'm going to skip the technical issues of the subject, and take the article as an opportunity to write a fragment of memoirs applicable to the "inside view of net-politics" part of the description line above (note I know at least two people appearing in the article will be reading this post, both of whom have kindly encouraged me to continue this blog, which is all the disclaimer necessary!). Namely,
More than a few people view the work of the Citizen Lab, and Psiphon, as important. The ONI as a whole receives funding from several major U.S. foundations that promote peace and democracy, including a recent $3 million from the MacArthur Foundation in Chicago. In addition, the Citizen Lab has received money from the New York-based Open Society Institute, which supports human rights projects and whose patron is billionaire George Soros.
At some point in late 2003, early 2004, somewhere in the mix of my winning a DMCA victory, and being turned down in the n'th attempt at getting a policy position, it became clear that if I wanted to seriously continue with Internet freedom activism, I was going to have to set up my own organization. Appoint myself Executive Director of something like "The Center For Censorware Studies". Go after foundation funding for money, maybe do the conference circuit.
I seriously considered it. But it just didn't seem like a workable idea. At the time, I'd gone through draining unemployment from the tech-wreck, and the programming market was finally picking up. Conversely, getting funding seemed like it was going to require a lot of work in competition with organizations which were far better "connected" than I was (Harvard!), so I'd be at an extreme disadvantage.
Sometimes people would suggest working for an existing group in a support role, but that was extremely problematic. Nobody wanted the specialized technical decryption work, it's not cost-effective for its legal risk. For generic programming, they could hire someone much less senior than me. And it wasn't a resume-enhancing job for me either. So, purely as a job, it was hardly a good deal for either side. Compare:
The third member of the Psiphon team, 42-year-old Michael Hull, was hired in January to make the program user-friendly. ... Trained in physics, Hull sold his document encryption company in 2003. "Over the years I've been building commercial, private software to solve problems for corporations," Hull says. "So this is nice because it kind of flips it all around. It's a way to give back while I have a chance."
Good for him. But it's why I sometimes say I regret doing so much anti-censorship effort, and not taking my chance at the tech IPO goldrush when money was falling from the skies (or at least it seemed that way). It seems that in order to do such activism, one has to be (the following are not exclusive):
1) Professional policy person (lawyer, lobbyist, etc)
2) Institutionally supported (i.e. an academic)
3) Independently wealthy *or* unconcerned with employment
And, sadly, I don't fit any of the categories, nor have I been able to find a functional way to get myself into any of them. I've never been able to solve this "business model" problem.
Big win for the Media Bloggers Association, as the lawsuit against the Maine blogger is withdrawn. Good job by everyone involved.
Though all the Usual Suspects are doing their set-pieces about Blog-Power, a few days ago, MBA President Robert Cox had a very interesting detailed blog post describing the strategy used:
The real story behind the "Maine Blogger" story is that this blogstorm did not just "happen". I personally spent several weeks developing a media strategy which we launched last Thursday morning. The original goal was to get the story in front of 3-5 mm people by Friday night. We easily surpassed that figure and the number continues to grow.
Once we were ready to drop the story, I reached out to the membership of the Media Bloggers Association with an "MBA Legal Alert" and they responded in force. Hundreds of bloggers responded to the MBA's request to post on this story and make their readership aware of what was happening in Maine. We also sent out a traditional press release to our "press list" and added in about 100 Maine/Travel media outlets - that's how the Globe got the story. Once the ball was rolling lots of other folks got behind the effort and Lance was a full-fledged bloglebrity.
This kind of blog/MSM media strategy is part of the two-pronged approach we take as part of our Legal Defense Initiative. I think the real story is that this strategy can be - and has been - so effective.
And I agree - it can be, and has been, so effective. But ... it's important to realize just how old-school top-down this is structurally. In fact, scarily so. Work with people who have big megaphones, get them to echo the story, then go up the media pyramid. It's extremely traditional. Now, the powers here were used for good instead of evil. But, still, what if it were the reverse?
It's a good thing if lawsuit-filers have to take into account that their target may become a cause célèbre, and get the support necessary to fight back. But note also that it's mathematically impossible for *all* bloggers to become causes célèbres. There's only so much support to go around. So nobody in particular can count on being supported beforehand.
Ultimately, I think the lesson is that media organization is (still) media organization.
Tom McCartin, president of WKPA, is most concerned about Mr. Dutson's public posts because if potential clients search for the agency online, they will likely see Mr. Dutson's critique-filled blog before the agency's own Web site. As a result, Mr. McCartin says his business, which sees capitalized billings in the $40 million range, has been hurt. And he wants to protect his reputation.
I'm dubious about the likelihood of appearing "before the agency's own Web site". Maybe that would be true for an A-list blogger. But for anyone else, that would be rare. Now, appearing on the first page, that would be possible in many cases.
The article cites a mainewebreport.com blog post from Feb 28 which I'll quote further:
I noticed in Maine Web Report's stats that someone found the site through a Google search for "Paino Advertising" ... this can't be good for the company's reputation. Sure enough, searching Google for Paino advertising brings up this site on page 2 (not great, I know! But we're moving up gradually). Not good at all for an ad firm.
Did an ad agency really sue over this (or at least have it be a major factor)? It would be notable if true.
Part of the rhetoric around the lawsuit against the MaineWebReport blogger is a large amount of "Google-huffing": the plaintiff, an advertising agency, is going to have Google results for its name dominated by criticism from bloggers. Note while I think that in principle it's a good idea that the more powerful should need to consider a public backlash when suing the less powerful, there's an aspect of meet-the-new-boss-same-as-the-old-boss in the concept that a handful of bloggers have the ability to determine the public perception of an entity. After all, there can only be ten top-ten results (and sites can appear twice). So we're talking about an extremely small number of people.
One of the very few advantages of my having a blog is that it provides me a means of running Google experiments. Despite being a Z-lister, I have accumulated enough site PageRank and such in my weblife (from other work) that I often rank far higher than my lowly blog position would otherwise grant me.
And indeed, my earlier post on the case is now in the top ten Google results for the plaintiff's name. But there's only been around five hits to it from various searches. So, sorry to blog boosters, I'm not sure the Google-huffing is accurate here. The mainstream media coverage is likely going to have far more of an impact based on sheer numbers.
Obviously, there are instances where such effects would matter. But it's going to depend a lot on the status of the critics and the relative power of the entity being criticized.