Sunday, April 29, 2012

Another Summary Paper

Here's another exciting marketing summary paper I wrote for class.



A Summary of “Viewer Preference Segmentation and Viewing Choice Models for Network Television”

In 1990 broadcast television network advertising revenue was approximately $25.5 billion a year, with General Motors alone spending $598.4 million on network ad time (Rust et al. 1992).  As such, both television networks and advertisers have a common interest in understanding why people choose to watch television, and why they prefer one program over another. In the pursuit of this understanding, it is useful to group television consumers into segments where possible, in order to make decisions based on the shared preferences of large numbers of people. Segments can assist networks and advertisers with the forecasting necessary in deciding who will watch which program. In this way, advertisers can spend their ad dollars more effectively, and networks can maximize profits by designing shows that the most valuable demographics will consistently want to watch.
The purpose of the paper is to design models that can address the following questions accurately:
1) Do identifiable segments exist?
2) How do viewers decide when to start and stop watching television?
3) How do consumers decide which television programs to watch (Rust et al. 1992)?
Previous research on this topic primarily falls into three general types.  The first type is “structuring viewing alternatives”, which attempts to group television programs according to their similarities and differences.  These groups of programs can be defined a priori, without supporting data, or they can be grouped according to empirically gathered viewer data.  Both of these methods must assume that homogeneous program categories actually exist in the first place. A third grouping method does not rely on this assumption, but instead uses multidimensional scaling to show the relative differences among shows (Rust et al. 1992).
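The multidimensional scaling idea above can be sketched in code. This is a toy illustration, not the paper's method or data: the dissimilarity matrix below is invented, and classical (Torgerson) MDS stands in here for whatever scaling procedure Rust et al. actually used.

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical (Torgerson) multidimensional scaling.

    Takes a symmetric matrix of pairwise dissimilarities between
    programs and returns low-dimensional coordinates whose distances
    approximate those dissimilarities.
    """
    n = dissim.shape[0]
    # Double-center the squared dissimilarity matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dissim ** 2) @ J
    # Coordinates come from the top eigenvectors, scaled by sqrt(eigenvalue).
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:n_dims]
    pos = np.clip(eigvals[order], 0, None)  # guard against tiny negatives
    return eigvecs[:, order] * np.sqrt(pos)

# Invented dissimilarities among four hypothetical programs: two sitcoms
# that are close to each other, and two dramas close to each other.
D = np.array([
    [0.0, 1.0, 4.0, 4.2],
    [1.0, 0.0, 4.1, 4.0],
    [4.0, 4.1, 0.0, 0.9],
    [4.2, 4.0, 0.9, 0.0],
])
coords = classical_mds(D)
# Distances between the recovered coordinates roughly match the inputs,
# so the two sitcoms land near each other and far from the two dramas.
```

The point of the map is that closeness reflects shared audiences rather than any label the analyst imposed in advance.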
The second type of previous research is “segmenting television viewers”, which resembles the first type in that viewer segments are often formed a priori from demographic data. Empirically derived segmentation, on the other hand, assumes that watching a television program gives a certain benefit to the consumer, and that similar programs will provide similar benefits (Rust et al. 1992).
The third type of previous research is “viewing choice models”, which attempt to predict which shows consumers will watch. These models are the most difficult to construct and as a result are much less common than the other two types. One phenomenon that complicates matters here is “audience flow”: the way previous conditions (such as whether the TV was on or off, or what the consumer was watching previously) affect consumers’ future choices (Rust et al. 1992).
Rust et al. (1992) contend that their model is superior to previous research because it combines the three types of existing models into one comprehensive model, estimates the preference functions in addition to simply identifying them, allows for the possibility that people’s preferences can differ for reasons other than “simple socio-demographic differences”, and, finally, includes the decision to watch television at all in addition to the choice of which program to watch.
The Rust et al. (1992) model maps the relationships among programs using Nielsen viewer data and multidimensional scaling. Programs are arranged by the preferences of similar audiences rather than by how similar the programs themselves seem. Sample data from 11,501 viewers was tested against the model. The results showed various ways in which the actual data differed from assumptions often made a priori.
Once the characteristics of programs are defined, they can be matched with the preferences found in viewers. The closer a program is to a viewer’s preference in the multidimensionally scaled space, the more likely the viewer is to watch. The results showed that this was in fact the case and statistically significant, and the model produced similar results when applied to another randomly selected portion of the data (Rust et al. 1992).
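One way to picture the distance-to-preference idea is a simple multinomial-logit sketch, where a program's utility falls with its distance from the viewer's ideal point and a fixed-utility “TV off” option completes the choice set. This is an illustrative assumption, not Rust et al.'s actual specification; the coordinates, `beta`, and `off_utility` values below are all made up.

```python
import numpy as np

def viewing_probs(viewer_point, program_points, beta=1.0, off_utility=0.0):
    """Multinomial-logit sketch of program choice.

    Each program's utility is -beta times the distance between the
    viewer's ideal point and the program's position in the scaled space.
    The returned vector's last entry is the probability of leaving the
    TV off (an option with fixed utility off_utility).
    """
    dists = np.linalg.norm(program_points - viewer_point, axis=1)
    utilities = np.append(-beta * dists, off_utility)
    expu = np.exp(utilities - utilities.max())  # numerically stable softmax
    return expu / expu.sum()

viewer = np.array([0.0, 0.0])                    # hypothetical ideal point
programs = np.array([[0.5, 0.0], [3.0, 3.0]])    # one near, one far program
p = viewing_probs(viewer, programs, beta=1.0, off_utility=-1.0)
# The nearby program gets the highest probability; the three
# probabilities (near, far, TV off) sum to one.
```

The same machinery extends naturally to the on/off decision: if every program's utility is low for some segment at some hour, the “off” option wins, which is how a model like this can also predict when sets are switched on.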
Lastly, the model was used to predict when people will turn their televisions on or off in the first place.  It was found that different portions of the populace will watch television at different times. For example, the segment dubbed “Western” was less likely to watch on the weekend.
This model shows advantages over previous research, and provides a valuable tool for analyzing an important body of information.
References
Rust, Roland T., Wagner A. Kamakura, and Mark I. Alpert (1992), “Viewer Preference Segmentation and Viewing Choice Models for Network Television,” Journal of Advertising, 21 (March), 1-18.


Questions
1. In the Rust article, what assumptions did the Rust model make when segmenting television programs?  When segmenting television viewers?
2. In what ways would the Rust model apply or not apply if it was used to segment data on webpages in place of television shows?

Monday, April 16, 2012

St. Louis Cardinals Opening Day

Last Friday was the opening game for the St. Louis Cardinals. I work very close to Busch Stadium, so opening day was part of my life whether I wanted it to be or not.  Parking turns into an expensive nightmare on game days, so to begin with I took the train in the morning. It was a nice change of pace.


The day before the game, the sidewalk right across the street from the ticket booths turned into a shanty town for scruffy people. I didn't really get how waiting across the street guaranteed them a spot in the actual line. Several of them were huddled around a burning fire pit near a bus stop enclosure, which I'm not even sure is legal.


On my way to work the next morning a monster line had developed. 


A fluorescent-lit computer screen is the only view I have from my cube, but the office right next to me has a great view of the stadium.  It rained a ton before the game started, and I think it was delayed.  You can see them rolling up the giant white tarp on the field.


My friends were either going to the game or just going to watch it at the bars in the neighborhood, which is always a fun time. The rain really dampened my enthusiasm for stopping by somewhere after work. I had no cell phone signal at all that day either, which I blame on the huge crowds. I ended up skipping the bars and going out with some friends later in the evening.  Hammerstone's in Soulard has a bring-your-own-mug night once a week, so I bounced over there instead.


Cavalia, a Cirque du Soleil wannabe, is in town. I walk by the huge circus tent every day on the way to work.

Saturday, April 14, 2012

St. Louis War Propaganda


"I can't pronounce your queer foreign name, young'un. Nor your dog's either."


This is amusing war propaganda courtesy of the Missouri State History Museum.

Wednesday, April 11, 2012

Course Work

I am currently pursuing a master of marketing research degree, and through the course of the program I do a steady amount of writing for papers and projects of all shapes and sizes. I think it's a shame to put so much effort into writing something, turn it in, and then that work never sees the light of day. So I am going to start posting papers that I write here. Why the heck not?

One of the ongoing assignments in my Advertising Research class is to write summaries of articles assigned as required reading during the course. Often they are scientific studies used to gain insight into some part of the advertising puzzle. After each summary, we were asked to write two hypothetical questions that might be used in part to construct our exam questions. Here goes.


A Summary of “The Impact of Content and Design Elements on Banner Advertising Click-through Rates”

The first banner advertisement was introduced in 1994.  Since then, internet advertising has developed into an entire industry, which generated approximately $7.2 billion in revenue by 2001 in the United States alone (Lohtia et al. 2003). About 35 percent of this was generated by banner advertisements.
The study in this paper defines its goals as twofold.  First, it attempts to define what makes a good banner advertisement, and second, it attempts to show the differences in what is effective between business-to-consumer (B2C) and business-to-business (B2B) advertisements (Lohtia et al. 2003).
Despite the frequency of use and the size of budgets in this medium, much of the information regarding the success rate of banner ads comes from the industry itself in the form of unempirical reports. These reports claim that banner ads improve branding, and that the size, interactivity, and positioning of an ad have a further impact on branding effectiveness. What little scientific testing does take place often has several weaknesses: samples comprised of brands that are already well known, small samples, and volunteer respondents who are likely aware that they are participating in a study.  All of these factors can contribute to bias and throw the generalizability of findings into question (Lohtia et al. 2003).
The study in this paper overcomes these shortfalls by using a large body of real data on advertisements, and by concentrating on what contributes to the usefulness of banner ads (Lohtia et al. 2003).
The study focused on click-through rate (CTR) as a dependent variable that can illustrate the effectiveness of a particular banner advertisement.  The context of the ad was also thought to be important, with different variables affecting a choice depending on whether the choice was high or low involvement. Independent variables were defined as cognitive content, affective content, cognitive design, and affective design. Content was measured by the use of incentives on the cognitive side, and by the use of emotional appeals on the affective side. Cognitive design was measured by examining the interactivity level of the ad, while affective design was accounted for by looking at the colors used. Lastly, the level of animation was deemed a factor likely to affect the CTR of the advertisement (Lohtia et al. 2003).
The hypotheses were then tested against a sample of 10,000 real-world banner ads.  Findings were numerous.  One interesting result was that the presence of incentives in banner ads can be worse than ineffective: while incentives had some influence on the CTR of B2C advertisements, they had a negative effect on B2B ads. Animation likewise had a negative effect on B2B ads but a positive one on B2C ads.  Both groups preferred a moderate amount of color in their banner ads (Lohtia et al. 2003).
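The B2C/B2B sign flip described above is exactly the kind of pattern an interaction term captures in a regression. The sketch below is purely illustrative: the data are synthetic and the coefficients (+0.05, -0.20) are invented to mimic the reported sign pattern, not taken from Lohtia et al.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
incentive = rng.integers(0, 2, n)   # 1 if the banner offers an incentive
b2b = rng.integers(0, 2, n)         # 1 for a B2B ad, 0 for B2C

# Hypothetical data-generating coefficients, chosen only to mimic the
# sign pattern summarized above: incentives help B2C a little but hurt B2B.
ctr = (0.30 + 0.05 * incentive - 0.20 * incentive * b2b
       + rng.normal(0, 0.02, n))

# Regress CTR on incentive, B2B, and their interaction via least squares.
X = np.column_stack([np.ones(n), incentive, b2b, incentive * b2b])
coef, *_ = np.linalg.lstsq(X, ctr, rcond=None)
# coef[1] is the incentive effect for B2C ads (positive here);
# coef[1] + coef[3] is the net incentive effect for B2B ads (negative here).
```

Reading effects off the interaction term this way is what lets a single model say one design element works for one audience and backfires for another.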
This information will help web designers and banner advertisement producers to fine tune their strategies in order to increase their effectiveness when targeting both businesses and consumers.
  

References
Lohtia, Ritu, Naveen Donthu, and Edmund K. Hershberger (2003), “The Impact of Content and Design Elements on Banner Advertising Click-through Rates,” Journal of Advertising Research, (December), 410-418.

   
Questions
1. In the Lohtia article, click-through rates and consumer opinion are two metrics used to show the effectiveness of banner ads.  What is another metric that could be used?
2. In the Lohtia article, why was the study done not a true scientific experiment?