How Did INFORMS 2011 Change Your Perspective on Old Problems?
One of the tricky aspects of INFORMS is knowledge management. After absorbing the ideas of the brightest minds in our field, one is left wondering about retention. Specific knowledge is buried rapidly under new knowledge. Quick quiz: can you remember three pertinent facts from the 8 AM Monday session? It can be a bit disheartening.
However, what I appreciate about INFORMS is the cumulative effect of many different talks on one’s framework for approaching problems. I will give an example from my own experience and hope to hear yours as well. I am a member of the Technology Management Society (Shameless JOIN NOW plug) and attended many talks on innovation, entrepreneurship, new product development and technology evolution. I was pleased at how much my perspective was broadened.
Consider the classic problem of trying to find improvements to an existing solution, specialized to an innovation context. Some sort of mechanism (say, user-generated ideas through open innovation) provides you with a pile of potential improvements to your existing product or service. How can one sort through those improvements to find optimal (or satisficing) solutions? If you are trained in optimization, the answer seems simple: construct an objective function, note any constraints, and use an algorithm to find solutions.
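To make that framing concrete, here is a minimal sketch (the idea names, values, and budget are all hypothetical) of "construct an objective, note the constraints, run an algorithm" applied to a pile of candidate improvements: each idea gets an estimated value and an implementation cost, and we pick the subset that maximizes total value within a development budget, i.e. a tiny knapsack problem solved by brute force.

```python
# Hypothetical idea-screening as a small knapsack problem: each candidate
# improvement has an estimated value and an implementation cost; pick the
# subset that maximizes total value without exceeding a development budget.
from itertools import combinations

def best_subset(ideas, budget):
    """Brute-force search over all subsets (fine for small idea pools).

    ideas: list of (name, value, cost) tuples.
    Returns (best_total_value, chosen_names).
    """
    best = (0, [])
    for r in range(len(ideas) + 1):
        for combo in combinations(ideas, r):
            cost = sum(c for _, _, c in combo)
            if cost <= budget:
                value = sum(v for _, v, _ in combo)
                if value > best[0]:
                    best = (value, [n for n, _, _ in combo])
    return best

# Made-up candidate improvements: (name, estimated value, cost).
ideas = [("faster search", 5, 4), ("dark mode", 3, 2), ("offline sync", 6, 5)]
result = best_subset(ideas, budget=7)
```

Of course, for realistic idea pools one would swap the brute-force loop for an integer-programming solver, but the structure — objective, constraint, algorithm — is the same.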
What the conference helped me realize is how the innovation context creates a lot of interesting variations on traditional approaches, and how limited my prior perspective was.
Quick cautions: I am intentionally highlighting a problem that I am not working on so as to avoid self-promotion, but as such I am not an expert on all areas. This blog is being written to create discussion and ideas rather than serve as a complete (or accurate!) research document. Essentially, I am experimenting with using a blog format as a vehicle for addressing half-baked ideas, as I mentioned yesterday. Also, brevity.
First, in an innovation context, ideas must be judged good by an authority (or authorities), which takes time. As such, it may be more important to rapidly discard poor ideas, so that authorities judge only ideas with a higher probability of being improvements. Discarding infeasible ideas versus searching for feasible ones is the same choice in many contexts…but it may hinge on a larger debate about whether good ideas exist in a vacuum or evolve in real time.
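The discard-first approach above amounts to a cheap pre-filter in front of an expensive judge. A minimal sketch (the cost field and budget constraint are hypothetical stand-ins for whatever hard constraints a firm actually has):

```python
def prefilter(ideas, is_feasible):
    """Cheap screen: drop ideas that fail any hard constraint before
    they reach the expensive authority review."""
    return [idea for idea in ideas if is_feasible(idea)]

# Hypothetical hard constraint: ideas costing more than 10 units are infeasible.
ideas = [
    {"name": "a", "cost": 3},
    {"name": "b", "cost": 12},
    {"name": "c", "cost": 7},
]
survivors = prefilter(ideas, lambda idea: idea["cost"] <= 10)
```

The authority then spends its scarce judging time only on `survivors`, which is the whole point of screening before judging.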
Second, the judgment of the authority can be perfect at the moment it is given, yet, due to fluctuations in markets, technology, firm capabilities, and other contextual factors, an option that was once an improvement may no longer be one. Essentially, Type I and Type II errors may become correct decisions, and vice versa.
Third, the nature of the relationship between ideas that are actual improvements and the sources (users who provided the ideas) becomes important. Do improvements only come from a limited number of skilled users, or should the firm cast a wider net to include all sources? Essentially, it’s an explore-exploit trade-off, made more complicated by network effects of ideas and users.
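The explore-exploit trade-off above can be sketched as a simple multi-armed bandit over idea sources. In this toy model (the success probabilities are invented, and the epsilon-greedy rule is just one of many bandit policies), each source has a hidden probability that its next idea is a real improvement; the firm mostly exploits the best source seen so far, but keeps exploring the rest:

```python
import random

def epsilon_greedy(sources, pulls=1000, epsilon=0.1, seed=0):
    """Toy bandit over idea sources.

    sources: hidden probabilities that each source's next idea is an
    actual improvement. With probability epsilon we explore a random
    source; otherwise we exploit the empirically best one so far.
    Returns how many ideas were solicited from each source.
    """
    rng = random.Random(seed)
    counts = [0] * len(sources)  # ideas solicited per source
    wins = [0] * len(sources)    # ideas that turned out to be improvements
    for _ in range(pulls):
        if rng.random() < epsilon or sum(counts) == 0:
            i = rng.randrange(len(sources))  # explore
        else:
            i = max(range(len(sources)),
                    key=lambda j: wins[j] / counts[j] if counts[j] else 0.0)
        counts[i] += 1
        if rng.random() < sources[i]:
            wins[i] += 1
    return counts

# Three hypothetical sources; the third yields improvements most often.
counts = epsilon_greedy([0.1, 0.3, 0.6])
```

After enough pulls, most solicitations concentrate on the best source while a trickle of exploration guards against mis-ranking it — though, as the paragraph notes, network effects among ideas and users would complicate this far beyond the independent-arms assumption here.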
Fourth, user learning complicates the system. For example, given enough feedback from the authority, will users improve the quality of their ideas? But providing that feedback takes time away from judging, or reduces its accuracy. Should users instead judge each other?
Some of these factors I once knew, or had seen in other contexts. Others were completely new to me. I apologize for any descriptive errors; I have tried to minimize overly technical explanations and citations. But I did this little exercise to start a conversation that I hope continues after we all leave Charlotte. I understand not everyone will want to share their new insights publicly in a comment, but I hope that you too can think of the many ways these talks have improved your perspective. Thanks for an excellent conference, and I look forward to further conversations via Twitter, email, and blogs.