This is part 3 of a project using the Analytics Scavenger Hunt on this user journal. The hunt gives users clues to follow in order to visit and scroll down 5 pages. The pages are in a variety of formats, some blog posts, others pages reachable from the top navigation. Content is varied as well, including images, video, and a link to an online game. Every clue is within one click of its answer.
I tracked a combination of events and destination goals prior to launching content experiments and added several more to run in tandem with the experiments.
Implementation: I set up the goals in Google Analytics and the events using the WP GA Events plugin.
Goals Part 1
- Scavenger Hunt: This was an early destination goal to track the number of users who clicked a promoted link to the landing/instruction page of the hunt.
- Hunt Comment Funnel: This is the first funnel set up to track progression from landing page through all five subsequent pages.
Associated Events: I set up a series of scrolling events to track user activity on pages where they needed to scroll to read a clue. These are the same pages tracked in the Hunt Comment Funnel goal.
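The plugin wires these events up from the WordPress admin rather than from hand-written code, but the underlying mechanics look roughly like the sketch below. It assumes the Universal Analytics `analytics.js` tracker (`ga`), and the event category, action, and label names are hypothetical placeholders, not the plugin's actual configuration:

```javascript
// Returns true once the user has scrolled to (or past) the bottom of the
// page; all three arguments are pixel measurements.
function reachedBottom(scrollY, viewportHeight, documentHeight) {
  return scrollY + viewportHeight >= documentHeight;
}

// Browser-only wiring: fire a single Google Analytics event the first time
// the visitor reaches the bottom of a clue page. Category/action names here
// are illustrative assumptions.
if (typeof window !== 'undefined' && typeof ga === 'function') {
  var sent = false;
  window.addEventListener('scroll', function () {
    if (!sent && reachedBottom(window.pageYOffset, window.innerHeight,
                               document.documentElement.scrollHeight)) {
      sent = true;
      ga('send', 'event', 'Scavenger Hunt', 'scroll-bottom', document.title);
    }
  });
}
```

The guard ensures the event fires at most once per page load, so each session contributes one scroll event per clue page rather than one per scroll tick.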
Reasoning: When I set up these goals and events I was primarily interested in analyzing traffic patterns and identifying areas where drop-offs occurred during the course of the hunt.
Goals Part 2
- Comment: This is a destination goal that tracks user progression from the final clue page to the comment entry box on the landing page. It is the original page in content experiment 1.
- Comment Var2: This destination goal tracks the variation page created for content experiment 1.
Associated Events: There are two click events, one for each variation, to track when the user clicks the link to go to the comment page.
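Again as a sketch rather than the plugin's real configuration: one click event per variation page might look like the following, with the `#comment-link` selector, the `data-variant` attribute, and the event names all hypothetical:

```javascript
// Builds the Google Analytics event arguments for a tracked comment link.
// Category/action names are illustrative placeholders.
function commentLinkEvent(variant) {
  return ['send', 'event', 'Scavenger Hunt', 'click-comment-link', variant];
}

// Browser-only wiring: one handler per variation page; the page marks its
// variant (e.g. 'original' or 'var2') in a hypothetical data attribute.
if (typeof window !== 'undefined' && typeof ga === 'function') {
  var link = document.querySelector('#comment-link'); // hypothetical selector
  if (link) {
    link.addEventListener('click', function () {
      ga.apply(null, commentLinkEvent(document.body.dataset.variant || 'original'));
    });
  }
}
```

Keeping the variant name in the event label is what lets the two click counts be compared side by side in the GA event reports.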
Reasoning: Posting a comment is the final task of the Scavenger Hunt. I wanted to track how many users were arriving at the comment page and whether they were using the links provided or other navigation methods.
Goals Part 3
- Hunt Squeak to Game: This destination goal was created to measure whether users were progressing from the “Squeak the Squirrel” page to the next page in the sequence. The series below replaced this goal once I began running content experiment 2.
- Hunt Squeak Var1 to Game Var1: The following destination goals were created along with content experiment 2 in order to track completions for each possible combination of variation pages.
- Hunt Squeak Var1 to Game Var2
- Hunt Squeak Var2 to Game Var1
- Hunt Squeak Var2 to Game Var2
Associated Events: There is one scroll event to track the total number of users to scroll to the bottom of the page to read the clue.
Reasoning: Initially, I set this goal because I suspected this page would be more likely to have drop-offs than others in the course of the hunt due to the amount of content and the complexity of the tasks needed to progress to the next page.
Goal: Improve Commenting at Completion
I set up a content experiment in Google Analytics to determine which of two pages produced more conversions.
The original page in this experiment had a link to the comments in the text of the “Success” message.
In the variant I added a large button to the comment page below the “Success” message.
Firefox Fix for Variant
Shortly after beginning this experiment, my informal user testing revealed that the button was not working for Firefox users. In order to continue the experiment with this variant running, I added a new link at the bottom of the page to keep the comment page accessible.
I also added a click event to the new Firefox link to see how many users used it to comment.
Results and Observations
After 6 days of data collection, experiment 1 shows the original page outperforming the variant. However, the variant was delivered to only 14 sessions, while the original had 30 opportunities to run.
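With samples this small, a quick significance check puts the “winner” in perspective. The sketch below is illustrative only: it assumes 5 of the 6 total conversions came through the original page and 1 through the variant — that per-arm split is my assumption, not a figure from the experiment report.

```javascript
// Two-proportion z-test for a small A/B test. The counts passed in below
// (5 of 30 vs. 1 of 14) are an assumed split for illustration.
function proportionTest(convA, nA, convB, nB) {
  var pA = convA / nA;
  var pB = convB / nB;
  var pooled = (convA + convB) / (nA + nB);
  var se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return { rateA: pA, rateB: pB, z: (pA - pB) / se };
}

var result = proportionTest(5, 30, 1, 14);
// rateA ≈ 0.167, rateB ≈ 0.071, z ≈ 0.86 — well below the 1.96 needed
// for significance at the conventional 95% level.
```

Under these assumed counts the gap between the two pages is nowhere near statistically significant, which is consistent with the need for more data noted below.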
For this experiment, it has to be taken into account that at least 2 Firefox users contacted me to report difficulty with a non-functioning button. Their feedback precipitated my adjustment to the content to aid Firefox users. Both of the users who contacted me did convert on the comment page, but they had to click off this page to get there.
I had expected the variant to be more successful than the original. The time running without the final revision may have contributed to lower performance.
Comparison Goal Tracking
Findings from the event tracking of the variation page are consistent with these results, with 1 click on the button and 1 scroll event to the bottom of the page recorded, plus 1 destination goal conversion.
At this time, it doesn’t appear that the Firefox link added to the Variant has been clicked.
Overall improved rates of commenting.
Even though the content experiment shows only 6 conversions, 17 new comments were posted during the experiment. That means 11 users took a different route to the finish.
At least 2 of these new comments came from Firefox users (prior to the added Firefox link) who used a different path back to the comment page. That leaves 9 users who didn’t take the predicted path to the comments.
Next steps for improvement.
It would be best to continue running the experiment for another week in order to collect more data, however the timetable for the project doesn’t allow for that much extra time.
Therefore, what can be learned from the data and observations at this time? Considering that 11 out of 17 users took a path different than predicted, it would be wise to reconsider the task itself. Traveling back to the landing page after moving 5 or more pages away may be more problematic than whether a link or button is noticeable.
If this project were to continue, the final page of the hunt should be a page where comments can be submitted without moving away to yet another page. Perhaps adding a comment form to the online game rather than the blog would be even better.
Goal: Improve Clarity of Path from “Squeak” to Finish Line
After informal user testing, I received consistent feedback that this was the most difficult step in the hunt. I set up a content experiment to track two versions of “Squeak”.
Many users felt the original version didn’t give enough information about how to locate the next clue.
In the variant, I bolded the most relevant information in the clue and added a hint to help users find the link to the game in the top navigation.
Results and Observations
After 4 days of data collection, the results were not very enlightening. Of the 8 sessions tracked in the experiment, only 1 was delivered the variant; all the others received the original page.
Ideally, more time is needed to evaluate whether one page is performing better than the other. As in the previous experiment, I expected the variant to perform better.
Difficult yet charming.
Although this experiment appears to show the original content as the winner, many users have given feedback stating otherwise.
Amusingly, even though users find it to be challenging, “squeak” has charmed many who expressed that the video was their favorite part of the experience.
Next steps for improvement.
As in the previous experiment, an additional week of data gathering would be helpful, but the project timetable doesn’t allow it. Based on the available data and clear user feedback, this content should be revised to make the progression between steps easier for the user.
Reduce the amount of scrolling. Positioning the clue closer to the video might improve conversion rates and reduce user time trying to find the next page to visit.
Create new content that is more directly tied to what “Squeak” has to say. The clue that leads to this content asks the user to look at the video to see what “Squeak” says at a time marker, but this doesn’t apply to the next step in the hunt; it is simply a way to encourage interaction with the video. This may confuse users trying to follow the clues and instructions. Adding another step to the hunt with a better connection to “Squeak” may improve user experience and sense of continuity.
Write a better clue. Combined with the fact that users may expect “Squeak’s” comments to be relevant to the next step in the hunt, the clue itself may simply be too vague.