Final Project: the Analytics Scavenger Hunt


This is part 3 of a project using the Analytics Scavenger Hunt on this user journal. The hunt gives users clues to follow in order to visit and scroll down 5 pages. The pages come in a variety of formats: some are blog posts, others are pages reachable from the top navigation. Content is varied as well, including images, video, and a link to an online game. Every clue is one click away from its answer.


I tracked a combination of events and destination goals prior to launching content experiments, and added several more to run in tandem with the experiments.

Implementation: I set up the goals in Google Analytics and the events with the WP GA Events plugin.
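For a sense of what an event looks like once it reaches Google Analytics, the sketch below builds the Universal Analytics Measurement Protocol payload that an event hit ultimately sends. This is only an illustration of the hit format; the tracking ID, client ID, and category/action names are hypothetical placeholders, not the values used on this site.

```python
# Sketch of a Universal Analytics "event" hit as a Measurement Protocol
# query string. All identifiers below are hypothetical placeholders.
from urllib.parse import urlencode

def build_event_payload(tracking_id, client_id, category, action, label=None):
    """Build a Universal Analytics event hit as a query string."""
    params = {
        "v": "1",            # protocol version
        "tid": tracking_id,  # GA property ID, e.g. "UA-XXXXXXXX-1"
        "cid": client_id,    # anonymous client identifier
        "t": "event",        # hit type
        "ec": category,      # event category
        "ea": action,        # event action
    }
    if label:
        params["el"] = label  # optional event label
    return urlencode(params)

payload = build_event_payload("UA-XXXXXXXX-1", "555", "scroll", "clue-4")
print(payload)
```

In practice the WP GA Events plugin fires these hits from the browser via the GA JavaScript library, so nothing like this has to be written by hand; the sketch just shows which fields a scroll or click event carries.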

Goal Table


Event Table


Goal Setting

Goals Part 1

  • Scavenger Hunt: This was an early destination goal to track the number of users who clicked a promoted link leading to the landing/instruction page of the hunt.
  • Hunt Comment Funnel: This is the first funnel, set up to track progression from the landing page through all five subsequent pages.

Associated Events: I set up a series of scrolling events to track user activity on pages where they needed to scroll to read a clue. These are the same pages tracked in the hunt comment funnel goal.

Reasoning: When I set up these goals and events I was primarily interested in analyzing traffic patterns and identifying areas where drop-offs occurred during the course of the hunt.

Goals Part 2

  • Comment: This is a destination goal that tracks user progression from the final clue page to the comment entry box on the landing page. It is the original page in content experiment 1.
  • Comment Var2: This destination goal is a variation page created for content experiment 1.

Associated Events: There are two click events, one for each variation, to track when the user clicks the link to go to the comment page.

Reasoning: Posting a comment is the final task of the Scavenger Hunt. I wanted to track how many users were arriving at the comment page and whether they were using the links provided or other navigation methods.

Goals Part 3

  • Hunt Squeak to Game: This destination goal was created to measure whether users were progressing from the “Squeak the Squirrel” page to the next page in the sequence. The series below replaced this goal once I began running content experiment 2.
  • Hunt Squeak Var1 to Game Var1: The following destination goals were created along with content experiment 2 in order to track completions for each possible combination of variation pages.
  • Hunt Squeak Var1 to Game Var2
  • Hunt Squeak Var2 to Game Var1
  • Hunt Squeak Var2 to Game Var2

Associated Events: There is one scroll event to track the total number of users who scroll to the bottom of the page to read the clue.

Reasoning: Initially, I set this goal because I suspected this page would be more likely than others in the hunt to have drop-offs, due to the amount of content and the complexity of the tasks needed to progress to the next page.

Content Experiments

Experiment 1

Goal: Improve Commenting at Completion


I set up a content experiment in Google Analytics to determine which of two pages produced more conversions.


The original page in this experiment had a link to the comments in the text of the “Success” message.



In the variant, I added a large button linking to the comment page below the “Success” message.


Firefox Fix for Variant

Shortly after beginning this experiment, my informal user testing revealed that the button was not working for Firefox users. In order to keep the experiment running with this variant, I chose to add new content at the bottom of the page to make the comment page more accessible.


I also added a click event to the new Firefox link to see how many users used it to comment.

Results and Observations

After 6 days of data collection, experiment 1 shows the original page outperforming the variant. However, the variant was delivered to only 14 sessions, while the original had 30 opportunities to run.
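To see why these sample sizes are too small to call a winner, a rough check of the conversion rates is helpful. The session counts (30 original, 14 variant) come from the results above, but the per-variant conversion splits below are hypothetical, since only the combined total of 6 conversions is reported.

```python
# Rough sample-size sanity check for experiment 1. Session counts are
# from the post; the per-variant conversion splits are hypothetical.
import math

def conversion_rate(conversions, sessions):
    rate = conversions / sessions
    # Standard error of a proportion: sqrt(p * (1 - p) / n)
    se = math.sqrt(rate * (1 - rate) / sessions)
    return rate, se

orig_rate, orig_se = conversion_rate(5, 30)   # hypothetical split
var_rate, var_se = conversion_rate(1, 14)     # hypothetical split

print(f"original: {orig_rate:.0%} +/- {orig_se:.0%}")
print(f"variant:  {var_rate:.0%} +/- {var_se:.0%}")
```

With uncertainties this large relative to the rates themselves, the two variations overlap heavily, which supports the observation that more data would be needed before declaring the original the winner.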


For this experiment, it has to be taken into account that at least 2 Firefox users contacted me to report a non-functioning button. Their feedback precipitated my adjustment to the content to aid Firefox users. Both of the users who contacted me did make it to the comment page, but they had to click away from this page to get there.

I had expected the variant to be more successful than the original. The time running without the final revision may have contributed to lower performance.

Comparison Goal Tracking

Findings from the event tracking of the variation page are consistent with these results, with 1 click on the button and 1 scroll event to the bottom of the page recorded, plus 1 destination goal conversion.

At this time, it doesn’t appear that the Firefox link added to the Variant has been clicked.

Overall improved rates of commenting.

Even though the content experiment shows only 6 conversions, 17 new comments were posted during the experiment. That means that 11 users took a different route to the finish.

At least 2 of these new comments came from Firefox users (from before the Firefox link was added) who used a different path back to the comment page. That leaves 9 users who didn’t take the predicted path to the comments.
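The comment accounting above can be laid out as plain arithmetic, using only the numbers already reported:

```python
# Comment accounting for experiment 1; all numbers come from the post.
total_new_comments = 17
tracked_conversions = 6      # conversions the content experiment recorded
firefox_workarounds = 2      # Firefox users known to take a different path

untracked = total_new_comments - tracked_conversions
unexplained = untracked - firefox_workarounds

print(untracked)     # users who took a different route to the finish: 11
print(unexplained)   # users with no known reason to leave the path: 9
```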

Next steps for improvement.

It would be best to continue running the experiment for another week in order to collect more data; however, the timetable for the project doesn’t allow for that much extra time.

Therefore, what can be learned from the data and observations at this time? Considering that 11 out of 17 users took a different path than predicted, it would be wise to reconsider the task itself. Traveling back to the landing page after moving 5 or more pages away may be more problematic than whether a link or button is noticeable.

If this project were to continue, the final page of the hunt should be a page where comments can be submitted without moving away to yet another page. Perhaps adding a comment form to the online game rather than the blog would be even better.

Experiment 2

Goal: Improve Clarity of Path from “Squeak” to Finish Line


After informal user testing, I received consistent feedback that this was the most difficult step in the hunt. I set up a content experiment to track two versions of “Squeak”.


Many users felt the original version didn’t give enough information about how to locate the next clue.



In the variant, I bolded the most relevant information in the clue and added a hint to help users find the link to the game in the top navigation.


Results and Observations

After 4 days of data collection, the results were not very enlightening. Out of 8 sessions tracked in the experiment, only 1 user was delivered the variant. All the others received the original page.


Ideally, more time would be needed to evaluate whether one page performs better. As in the previous experiment, I expected the variant to perform better.

Difficult yet charming.

Although this experiment appears to show the original content as the winner, many users have given feedback stating otherwise.


Amusingly, even though users find it to be challenging, “Squeak” has charmed many who expressed that the video was their favorite part of the experience.

Next steps for improvement.

As in the previous experiment, if the timetable for the project allowed for an additional week of data gathering, it would be helpful. Since this is not possible, based on the data available and clear user feedback, this content should be revised to make the progression between steps easier for the user.

Possible solutions

Reduce the amount of scrolling. Positioning the clue closer to the video might improve conversion rates and reduce the time users spend trying to find the next page to visit.

Create new content that is more directly tied to what “Squeak” has to say. The clue that leads to this content asks the user to look at the video to see what “Squeak” says at a time marker, but this doesn’t apply to the next step in the hunt; it is simply a way to encourage interaction with the video. This may confuse users trying to follow the clues and instructions. Adding another step to the hunt with a better connection to “Squeak” may improve user experience and sense of continuity.

Write a better clue. Beyond the fact that users may expect “Squeak’s” comments to be relevant to the next step in the hunt, the clue itself may simply be too vague.

Project: Content Experiments

The Hunt Continues

This week, I continued tracking the performance of the Analytics Scavenger Hunt and performed informal user testing to assess areas where content could be revised to improve user experience and goal conversion rates.

Informal User Testing

After analyzing week 8 data and project results, I contacted several participants who had completed or nearly completed the hunt. I was also fortunate to receive voluntary reporting from users to let me know about their experiences with the Scavenger Hunt.

Although most users reported that they felt the Scavenger Hunt was engaging and that they were able to progress through the steps with relative ease, there were two main areas of concern identified.

  1. Submitting a comment upon completion of the hunt.
  2. Progressing from the “Squeak the Squirrel” post to the next page in the sequence.

I also noted during my initial data collection that these two steps were not performing as well as the rest of the scavenger hunt.

Submitting Comments

The instructions for the Scavenger Hunt include posting a comment on the landing page, a blog post that marks the start of the exercise.


After analyzing the first wave of analytics gathered, I noted that although there was a good showing of users completing the entire course of web pages (and in the correct order), they were not returning to complete the final task in the instructions. In fact, in the first round of testing, only 1 of the 5 users who completed the entire sequence returned to post a comment.

Experiment 1

Goal: Improve Commenting at Completion

The first change I made to improve goal completion was adding a link to the word ‘comment’ on the last page of the scavenger hunt. The first version I ran didn’t contain a link here.


Then, I created a variant with a much more noticeable comment button, to see which version performed better in my content experiment.


After setting up my initial experiment to improve commenting, I sent out invitations to a new wave of my contacts. Commenting has improved tremendously since implementing this test.

After implementing these changes, I continued to follow up with some users who had made it through the full scavenger hunt to collect feedback. In doing this, I discovered that the button on the variant page was not working for Firefox users.


In addition to discovering the Firefox compatibility issue, 2 users expressed having difficulty progressing from the “Squeak the Squirrel” post to the next page in the sequence.

I had anticipated that this part of the hunt might become a drop-off point for some users. This is the most complicated task of the hunt, in part due to the amount of content on this blog post, but also because there are several steps to complete before progressing.


After scrolling to the bottom, users find Clue 4.


Some users were able to take the information in the clue and progress to the next task, but reported that it took longer than they expected; some also reported dropping off at this point.

After receiving this user feedback, I decided to run another experiment in order to improve the experience and progression to the next task.

Experiment 2

Goal: Improve Clarity of Path from “Squeak” to Finish Line

For this experiment, I left the original page as shown above and created a variant to test performance. I made simple changes to help give better clarity about where users should travel to next.


Comparison Tracking

In order to compare the success and/or discrepancies between different data collection methods, I set up several new destination goals in GA to compare to the content experiment results.


These goals are being tracked independently from the experiments and are set up to monitor traffic/conversions between all possible combinations of the variation pages used in the experiments.

Because time was shorter for Experiment 2, I was particularly interested in being able to monitor real time activity, in case further adjustments to the content might be needed.

Minor Implementation Issues

The process of setting up the experiments went relatively smoothly, except for my initial attempt to use the Google Analytics Content Experiments Plugin. Unfortunately, when trying to verify and launch my experiments, I received error messages regarding the placement of the experiment code on the original page when using the plugin. After several attempts, I decided to add the code into the header manually rather than use the plugin. After I did this, the error message cleared for the original page and I launched the experiment.

Next Steps

I invited another wave of contacts to participate over the last few days of testing. I will continue informal conversation about site performance as needed, but expect to spend the remainder of the experiment monitoring the data as it comes in.

Project: Goals and Funnels

For this project, I created a scavenger hunt on this user journal with clues for users to follow in order to visit and scroll down 5 pages. The pages come in a variety of formats: some are blog posts, others are pages reachable from the top navigation. Content is varied as well, including images, video, and a link to an online game. Every clue is one click away from its answer.


Set Up

Google Analytics

I set up a destination goal using Google Analytics to track user progress onto the landing page (the blog post with instructions shown above) of the scavenger hunt.

I also created a funnel with the order of pages users would take if following the clues sequentially to the finish. I chose not to make the funnel required, because users may take different routes, or become interested in other content along the way. Reaching all of the pages, not necessarily in order, is more relevant to completion.

WP GA Events

To compensate, in part for not making the funnel required, and in part due to my own fascination, I used WP GA Events to add a scrolling event to each page. The id for each scroll event is located in the clue, in order to observe how many users scrolled to the clues on the page. In addition, I added a click event to a comment link on the final page to see how many users would return and comment from the link.


Once I created all of the content, coded scroll/click events, and set up the goal/funnel, I sent invitations to some of my personal contacts to drive traffic to the landing page.

Initial Findings

Scroll Events

Below are the top scroll events, ordered by clue number (clue 1 is on the landing page).



From the order, I can already see that the pages with clues 4 and 3 show higher engagement than the others, which may be due to users returning to those pages more than once. What is interesting is that there are 19 events tracked on the landing page, but only 6 on the page with the final clue.

Goal Completions

At this point, there have been 9 destination goal completions tracked to the landing page. Five of the total originate from Facebook, where I invited personal contacts with a direct link to the landing page.


Of the 9 total conversions, all but one entered the site via the landing page, which is what I anticipated. The 8 direct entrances are consistent with my main traffic source via the link I sent to contacts.


Of these 8 conversions, only 2 continued on to the next page in the scavenger hunt to view clue 2.


At this point, they are joined by 3 users arriving from the home page of the blog who are traveling along to the contact page and clue 2 as well.


There are 0 drop-offs from this group of 5 until they begin to reach the end of the course of scavenger hunt clues. At that point, 2 drop off and 3 continue to travel the site. As you can see from the images, 1 is returning to the ‘play the game’ page again.
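The flow described in the last few paragraphs can be summarized with simple arithmetic, using the counts reported above (8 direct entrances, 2 continuing to clue 2, joined by 3 home-page entrants, then 2 dropping off near the end):

```python
# User-flow counts from the post, reduced to a small retention summary.
entered_direct = 8       # direct entrances to the landing page
continued_to_clue2 = 2   # direct entrants who went on to clue 2
joined_from_home = 3     # users arriving via the blog home page

mid_funnel = continued_to_clue2 + joined_from_home
dropped_at_end = 2
still_browsing = mid_funnel - dropped_at_end

print(mid_funnel)      # users traveling the middle of the hunt together: 5
print(still_browsing)  # users still exploring the site at the end: 3
```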


Others diverged from the path slightly and are still completing the hunt at this point. In fact, it appears that at least 2 users are going through the pages a second time.


All of these users drop off by the 11th interaction on the site (the final pages were Contact Us and Play the Game).


This user flow through the site supports my earlier assumption that some pages are receiving higher scroll event results due to return visits to popular pages.

Conclusion and Next Steps

I’m pleased with the initial data collection results. They are rich and varied, which gives me the opportunity to examine how different measurement methods reflect user behaviors.

One part of the hunt that I didn’t track at this stage was leaving a comment upon completion of the hunt. Only one user completed this task, which may be due to the fact that it requires returning to the landing page to enter the comment (or that users don’t want to leave a comment).

Moving on to the next phase of the project, I plan to take a look at the content and organization to determine what improvements could be made, both to overall results and to ease of commenting at completion.

Analytics Scavenger Hunt

Welcome to the Analytics Scavenger Hunt!


  1. There are several items to find on this hunt. They are all located somewhere on this site.
  2. Follow the clues. All items are only one click away from their clue.
  3. When you are finished, submit a comment on this post and tell us how many of the items you found.

Clue 1

Sometimes the way to find the best starting point is to contact the source.

Adding the Analytics

Project Summary

I selected this User Science Journal blog for my first analytics experiment, because I have the ability to add analytics and make changes at will.

Adding the Analytics

Adding analytics to this WordPress blog was a relatively simple process. All that was needed was to add the scripts generated by Google Analytics to the site’s header, which is located in the settings. Scripts can also be added to the headers of individual pages that are being tracked.

Pre-evaluation Statements

I made the following assumptions about this site prior to data collection with Google Analytics and will explore whether they are supported by the data in this article.

  1. The majority of users hail from the U.S. and more locally the Seattle area.
  2. The majority of users will enter the site from the home page.
  3. The home page is the most popular, because it is a blog and all of the content is available in a continuous scroll.

Preliminary Findings from Data Collection

Overview of all sessions on this site during the collection period Aug 1 to Aug 7.


The majority of users hail from the U.S. and more locally the Seattle area.

As predicted, the majority of the users were from the U.S. It is interesting to note, however, that there were international users from both Iraq and India.

[Image: GA sessions by country]

Also as predicted, the majority of users were located in the Seattle area or nearby northwest locales. It was surprising that the 2nd most common “city” came from users not sharing any city location at all.

[Image: GA sessions by city]

The majority of users will enter the site from the home page.

The home page is the most likely entry point, or landing page, for users, and this was supported by the data collection. Out of 18 total sessions, 13 began on the home page.

[Image: GA sessions by landing page]

The home page is the most popular, because it is a blog and all of the content is available in a continuous scroll.

As well as being the most common landing page, the “User Science Journal” home page was also the most popular among users with 34 page views out of 46 total views. Users spent an average time of 01:54 on this page.

[Image: GA page views by page title]

Out of Curiosity

I also examined data grouping channel types. The majority of traffic came from a direct source, which would be consistent with my contacts who clicked through a link I sent to them via email. Others came via a link I shared on social media (Facebook).

[Image: GA traffic by channel]

The remainder came from referral sources I didn’t recognize. Here is a more detailed view.

[Image: GA traffic by source]

Preliminary Conclusions

My preliminary findings, using the data collected with Google Analytics, support all pre-evaluation statements.

I also discovered that, although the majority of traffic came from sources I generated through my personal contacts, 16.67% of traffic was referred by other sources.
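The referral share above can be reconstructed from the session counts reported earlier. The 18 total sessions comes from the data above; the 3 referral sessions is an assumption that reproduces the stated 16.67%.

```python
# Reconstructing the referral share. Total sessions (18) is from the
# post; 3 referral sessions is an assumption matching the stated figure.
total_sessions = 18
referral_sessions = 3

share = round(referral_sessions / total_sessions * 100, 2)
print(f"{share}%")
```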

Project: “Squeak the Squirrel” with Subtitles


Published 1957
Usage Public Domain
Subtitles Transcribed July 31, 2016

This video shows how a gold-mantled ground squirrel at Crater Lake National Park has learned to solve problems connected with getting food. It illustrates how an animal can learn, through a series of exercises and with many peanuts, to find food that is hidden from view or out of reach. To license this film and get a higher quality version for broadcast/film purposes, contact A/V Geeks LLC.

Run time 10:00
Production Company Churchill – Wexler Films
Audio/Visual sound, color

Film downloaded from:

(Did you read what Squeak had to say? Scroll to the bottom for the next scavenger hunt clue.)

Challenges and Surprises

Overall this was a very enjoyable project. The most challenging part was selecting an appropriate video. Many that I was drawn to initially didn’t have much content to transcribe, or had long periods with only instrumental music. It took some time to watch through several selections before I hit upon “Squeak the Squirrel”.

I was surprised by how interesting I found it to sync the subtitles with the action in each scene, and to display the subtitles long enough to read comfortably, but not so long that they became a distraction from the scene. I watched the film through several times just to add or remove a few seconds from the subtitles in order to improve the flow.

Project Workload

It took longer than I anticipated to complete the subtitles, based on the estimates in our assignment video, which suggested about 20 minutes for a 5 minute video. The video I selected was 10 minutes long, but took several hours to properly transcribe. This may be due in part to my own inexperience, and also to the fact that it is a documentary with more spoken content to transcribe than other types of video.

YouTube Tools

YouTube didn’t generate subtitles for the first film that I selected. This was likely due to poor sound quality on the recording, or because there wasn’t any spoken dialogue for the first minute.

For “Squeak”, the auto-generated subtitles worked, but they required a lot of editing to form grammatical sentences. That said, having them as a framework made the work much easier than transcribing from whole cloth.

The interface provided by YouTube was relatively simple to use. Being able to see the captions in a strip along the bottom of the video was extremely helpful. It made syncing the text with the scenes much easier.


I’m satisfied with the outcome of this project and look forward to doing more like it. There is definitely an art to transcribing, especially in cases where there are sound effects or music that must be interpreted. I plan to continue acquiring skills like this that improve user experience and access to media.

Stretch Goals

For a stretch goal, I downloaded a copy of the subtitles and shared them by posting a link to the file in the comments for the film.

It can be found here:

Clue 4

You have almost finished the hunt. The next item is a 4 digit number. To find it you’ll have to play a game.