Monday, January 20, 2020

"How do entrepreneurs hone their pitches? Analyzing how pitch presentations develop in a technology commercialization competition"

In "How do entrepreneurs hone their pitches? Analyzing how pitch presentations develop in a technology commercialization competition," Spinuzzi et al. (2015) study the sixth year of the Korea-based Gyeonggi-do Innovation Program (GIP), a program run jointly with the IC² Institute. From what I can tell, the program brokers between emerging Korean entrepreneurs and U.S. markets. The program provides a service: it claims to teach firms how to better market their products in the U.S., and possibly elsewhere abroad too. But the program also provides feasibility studies, which the firms receive in the form of the Quicklook® report. That is, the program sends market researchers out into the field, and these researchers determine whether the product has a market in the U.S. and, if so, what that market might look like or consist of. Mentors and judges use these reports as a basis on which to form their judgments of the firms' pitches. The GIP received over 200 applicants that year (2013), 25 made it to the semi-finals of the competition, and Spinuzzi et al. (2015) study four of those semi-finalists.

Here's a bit on the GIP:
Such consortia, according to Gibson & Conceição, attempt to "shorten learning curves and reduce errors" while "provid[ing] access to regional, national, and international markets, resources, and know-how" ([10] p. 745; cf. Park et al. [19], Sung & Gibson [30]). Such programs implicitly emphasize understanding markets and developing value propositions that speak to the needs of the catchers; they typically provide actual market feedback appropriate for the market dialogue we discussed earlier. For instance, GIP contractors research a target market, identifying and interviewing potential stakeholders, then writing results in the form of what Cornwell calls a Quicklook® ([20]; to understand Quicklook® revisions, see Jakobs et al. [11]), a type of technology assessment and commercialization report that articulates market feedback. But when they help entrepreneurs formulate their arguments and revise them to address market feedback and needs, programs such as the GIP typically provide tacit, context-based support rather than explicit, systematic support. At the GIP, pitch decks and associated genres are described in templates; instructions on how to conduct the dialogue are conveyed through a team of mentors with different backgrounds, specialties, and experiences. Furthermore, programs such as GIP tend to take on entrepreneurs operating in many different sectors, pitching to markets with differing regulatory constraints, competitive landscapes, business development cycles, and margins; this wide variation makes it difficult to systematize pitch development, and consequently the training process emphasizes contingencies and draws heavily on the situated judgment of mentors such as trainers. (Spinuzzi et al., 2015, n. pg)

Per usual, Spinuzzi et al. (2015) gather data in the form of interviews, artifacts, observations, and surveys. They asked three questions:
RQ1: What kinds of feedback did presenters receive in the Quicklook® reports and training? 
RQ2: What changes did they make to individual pitch arguments between training and final pitches? 
RQ3: Do these changes correspond with favorable judges' scores?
To answer RQ1, they found that, in general, presenters received three kinds of feedback: (a) structure, (b) claims and evidence, and (c) engagement. For example: (a) you should add these three slides at these specific locations in order to better fit your presentation into the genre of the pitch; (b) you should qualify your claims, since you're not as original as you think you are (in fact, you should move from the known to the unknown via a matrix that shows how your product matches up against analogous products already in circulation in the U.S.); and (c) you should work on selling the point in person, say, by rolling the film rather than just showing a still (and bring your product up on stage while you're at it).

To answer RQ2, some firms changed more than others. However, despite Spinuzzi et al.'s (2015) claim that "...we saw similarities in how they took up and addressed specific kinds of feedback in structure, claims and evidence, and engagement" (n. pg), I don't really see the similarities, save for the fact that all of the firms made changes within the feedback categories (structure, claims and evidence, and engagement).

To answer RQ3, as Spinuzzi et al. (2015) themselves admit, it's hard to say. One of the four firms clearly implements the feedback, which results in a co-constructed claim between rhetor and audience (see London et al., 2015)--"co-constructed" being a good thing. It means something happened. An action took place. But some firms didn't score that well, partly because they didn't implement the feedback, but also partly because they probably thought it was pointless to do so (since the Quicklook® report had identified that the product had no market in the U.S.). Hence we arrive at a motif that we know well from other studies of revision, or even from disability studies: it's best to get feedback as early as possible, so that it arrives at a point in development when change is still realistic. Relatedly, one of the things Spinuzzi et al. (2015) find is that, although feedback can be leveled at design, use, or argument, in a technology commercialization competition such as this one, it's only really possible to change the argument. Despite this limitation, the mentors and judges did make a good point when change seemed futile: they suggested to one firm that, instead of marketing its composters to individual households, it could market the technology as a facility where people could drop off their garbage.

While Spinuzzi et al. (2015) never come right out and say this in so many words, I don't think they think the competition was run very well. Firms got the market research too late, which was made worse by the fact that judges had the feedback but, by convention, didn't share it with the firms until after the initial judgment. Moreover, the consortium didn't scaffold some of the behaviors it wanted its competitors to exemplify: "...they wanted to know whether the presenters could present compellingly to US audiences—a factor that included facility in English, but also included a general ability to connect (that is, a subjective evaluation that was not further characterized by subcriteria)" (Spinuzzi et al., 2015, n. pg).

Spinuzzi et al. (2015) also recommend that the Innovation Program grade competitors differently: instead of collapsing all of the metrics into a single score (1-4), they recommend rating firms separately on categories such as claims, evidence, engagement, structure, and so on.

Just a quote: "...allow them to consider market feedback and begin incorporating it into draft presentations. It could also soften innovators to trainer suggestions, perhaps making teams like K6017 more likely to adopt them" (Spinuzzi et al., 2015, n. pg). Soften.

I had some questions, though. Why was the composition of the author team different this time? Why was a psychologist (Keela S. Thomson) on the team?

I was also confused about the coding. Did they start by coding the slide decks? Did the emergent categories come from an analysis of those artifacts alone, and were the same categories then applied to the other artifacts and observations? Quote: "Next, we applied the codes to trainer's feedback videos, then used the codes to identify related feedback in the corresponding Quicklooks. These two data sets represent feedback that presenters received between their training and final pitch presentations. By coding them, we identified feedback that appeared to influence the final pitch" (Spinuzzi et al., 2015, n. pg).
