
It’s been a little over one week since the Pear Analytics Twitter study we posted reached blogs and media outlets all around the world. We were extremely pleased with the outcome and all of the constructive feedback. As mentioned in the study, we will be monitoring Twitter usage and behavior in an ongoing manner, and part of this post will explain what we have planned next.

With the large amount of media attention and exposure our company has received as a result of this study, I would first like to take this opportunity to not only clarify a few things, but ask for your continued input so we can make the next study even more meaningful.

Setting the Record Straight

There were thousands of blog posts, comments, and general commentary about this study over the last week, and I don’t think I’ve ever seen such a wide spectrum of responses; we welcome all of it. There were people who loved it, people who hated it, and everything in between.

First, let me say that we are big fans of Twitter. We use it to share with friends and family, to share our products, to brag about our clients, and in general to share and receive useful information. Personally, I have found great resources through the folks I follow on Twitter; but I admit, I can’t keep up with all of it, and there is an overwhelming number of tweets that do not interest me.

Second, we’re not telling you how to use Twitter, or that people are using it the wrong way – we simply reported, in what we hoped was an interesting way, on what we found.

Third, we were not paid to do this study, nor have we made one dime on it. We also did not spend one dime promoting it. My friend at Sales By 5 sent a single email to Mashable to see if they would be interested in sharing the report. They covered the story, and other outlets picked up on the conversation. We did subsequently send additional notices to media outlets once it began taking off.

Last, I have a personal relationship with Paul Singh, the founder of Philtro.com, and I purposely included his tool in our whitepaper because I think it’s a great product – and it’s FREE; so again, there is obviously no gain in it for us, and all he really gets are some new subscribers to further refine his tool.  We will be partnering on future studies.

Why Are We Doing This?

Pear Analytics provides insights to marketers through data analysis. We have done other studies on Twitter in the past, as well as whitepapers on website visitor loss and on how marketers can effectively track offline media. Every day we meet with clients who are not using Twitter and want to learn more about how to use it, why they should use it, or what other people are using it for. We believe there are usage and behavior insights about Twitter that many are interested in.

Criticisms

As I mentioned before, there was certainly a fair share of constructive feedback about the study, so I’d like to share the major criticisms and what we plan to do about them moving forward:

Sample Size – many commented that our sample size was too small relative to the total volume of tweets. Before we started the study, we assumed that there were about 3 million tweets per day in the U.S. alone. Several of us have math and engineering backgrounds, so we determined the sample size using statistics, and we even checked with some old college profs just to make sure we weren’t completely off track. The result was that 2,000 tweets would be sufficient. Moving forward, we will increase the sample size, provided we have sufficient resources.
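For those curious about the math, here is a rough sketch of a standard sample-size calculation for estimating a proportion. The confidence level and margin of error shown are illustrative assumptions on my part, not necessarily the precise parameters we used:

```python
import math

def sample_size_for_proportion(z: float = 1.96, p: float = 0.5, margin: float = 0.022) -> int:
    """Standard sample-size formula for estimating a proportion.

    z      -- z-score for the desired confidence level (1.96 ~ 95%)
    p      -- assumed proportion (0.5 is the most conservative choice)
    margin -- acceptable margin of error (~2.2 percentage points here)
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

# With these illustrative parameters the answer is about 1,985 tweets --
# roughly 2,000 -- and the result barely changes whether the population
# is 3 million tweets per day or 30 million.
print(sample_size_for_proportion())
```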

Subjectivity – many folks emailed us asking what constituted “pointless babble.”  The criteria were these: the tweet did not contain an “@”, “RT”, “via” or a short URL, and it did not appear to be useful to a large percentage of a user’s followers (more than 50%). If a tweet met both conditions, we put it in this bucket. Believe it or not, these became very easy to spot in the public timeline – tweets like “I just saw a raccoon” or “I need to buy some shoes today” fall into this category. Now, if you are a hunter or the owner of a shoe store, you would argue that those tweets are relevant, right?  Fair enough. Part two of our study is going to let real users vote on tweets as “pointless babble” themselves. More on this in a bit.
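The mechanical part of that rule is simple enough to express in a few lines of code. This is only an illustrative sketch of the first-pass check (the 50% usefulness judgment was still made by a person); the function name and pattern are made up for this example, not pulled from our actual tooling:

```python
import re

# Markers that exempt a tweet from the "pointless babble" pool before any
# human judgment: a mention, a retweet, "via", or a link/short URL.
_MARKERS = re.compile(r"@\w+|\bRT\b|\bvia\b|https?://\S+|\bbit\.ly/\S+", re.IGNORECASE)

def might_be_pointless_babble(tweet: str) -> bool:
    """First-pass check only; a human still judged whether the tweet
    would interest more than 50% of a user's followers."""
    return _MARKERS.search(tweet) is None

print(might_be_pointless_babble("I just saw a raccoon"))           # True
print(might_be_pointless_babble("RT @pearanalytics: new study!"))  # False
```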

Public Timeline – some folks claimed that sampling the public timeline is not an accurate representation of the kind of tweets one would actually receive, since conceivably you would only follow relevant people who always say relevant things. I don’t really believe that, but what we’ll do is let the users decide what is “pointless babble” – that way, the judgment comes from users who supposedly hand-selected the people they want to follow.

Categorization – lots of comments claimed that the categories were vague and subjective. I still feel the categories are fine – but what we can do is sub-categorize on the next round. For example, we could break the News category out into mainstream, tech, social media, etc. For Conversational and Pass Along Value, we could report what percentage of those tweets contained links. This keeps the primary categories consistent between reports for comparison purposes.
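Purely as an illustration of how sub-categories could hang off the existing primary categories (the names below are examples from this post, not a final taxonomy):

```python
# Illustrative only -- one way to attach sub-categories to the primary
# categories so that reports remain comparable between rounds.
SUB_CATEGORIES = {
    "News": ["mainstream", "tech", "social media", "other"],
    "Conversational": ["with link", "without link"],
    "Pass Along Value": ["with link", "without link"],
    # Remaining primary categories keep no sub-categories for now.
}
```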

Moving Forward

As with any analytical study, there is iteration and refinement. For our next report on Twitter usage, the primary focus will be collecting data from real users. To do this, we are partnering with Philtro.com to refine the “pointless babble” category, while continuing to pull random tweets off the public timeline for the remaining categories and creating sub-categories for further refinement and insight.

We created http://pointlessbabble.pearanalytics.com to show you a live feed of the tweets deemed “pointless babble” by real users and by an advanced algorithm that can detect tweets of a similar nature. Keep in mind this is in beta testing for the next few weeks until we can perfect the process.

Thanks again for the support, criticisms, and other feedback.

Ryan Kelly

CEO, Pear Analytics