

In a Q&A session at a student-run event a couple of years ago I was asked to answer the question "Name one thing you're really good at".  Most of the audience knew me, either in person or by reputation and job title.  I assumed they were expecting me to name a skill that had propelled me through my career.  It was not a question I was anticipating or particularly prepared for - however, in a public "lightbulb moment" I answered that I was good at "Failing".  The room went quiet.  You could hear a pin drop as my audience started to process my one-word answer.  I let that word sink in for a moment and then elaborated.

Firstly, I discussed the fact that I talk to all sorts of students in all sorts of situations.  Some have just "failed" a test or exam and are drawing all sorts of conclusions about what that implies.  The fact that I am a Professor does not make me immune to "failure".  My own student transcript is full of high grades.  However, my mark on my very first University test was definitely not in A+ territory.  If I had let that define me, life would have been very different!  I'd skipped first-year University classes in a "direct entry" program and started University study at second-year level.  I used the low mark on my first test as fuel to figure out what it would take to truly succeed in that environment.

Failure is another stepping stone to greatness.

The version of my CV that I would normally share when applying for a grant, promotion or award lists a whole range of academic and professional successes - papers published, grants won, awards received.  However, what most people don't get to see are the file folders of unfunded grant applications, the paper reviews where I could readily believe the reviewer must be referring to someone else's paper, or the nomination material for awards that went to other deserving applicants.

The iceberg illusion

The successes on my CV are, however, built on a string of "failures".  Telling a group of students that I am good at failing was a statement about the resilience needed to pursue an academic career.  Being "good at failing" means that I've always made a point of learning everything I can from situations where the outcome could not be described as a perfect "success".  If success is an iceberg, then the "failures" sit below the waterline, invisible to most people.  For some thoughts on creating a "CV of failures", check out this post on the GradLogic blog.

I'd encourage anyone in an academic environment to embrace failure!

“The Iceberg Illusion” illustration is by Sylvia Duckworth, used under a Creative Commons license.


This post is a guest post from Dr Andreas Kempa-Liehr, a data scientist who is one of the newest members of the academic staff in the Department of Engineering Science.

Decisions under uncertainty

The only certain thing about the future is its uncertainty. Yet we are constantly making decisions about the immediate future, both in our private lives and in the business and engineering processes we are responsible for. What enables these decisions are our individual skills, which we have learned from interactions with our environment. This kind of knowledge can be interpreted as our personal, intrinsic model of the environment, which we use for solving problems. It comprises both our expectation of what is likely to happen and our understanding of how to achieve a desired outcome.

The problem is that people are not very good at making decisions under uncertainty, a point that might be boiled down to the following quote from Amos Tversky, who worked with Nobel laureate Daniel Kahneman [1] on the discovery of systematic cognitive biases:

“The evidence reported here and elsewhere indicates that both qualitative and quantitative assessments of uncertainty are not carried out in a logically coherent fashion, and one might be tempted to conclude that they should not be carried out at all.” [2]

Does this mean that objective algorithms should be able to make better micro-decisions? Yes, but implementing them requires a clear understanding of what better means (Domain Expertise), in order to develop models for predicting the information needed to do better (Data Science) and models for making decisions from that information (Operations Research). The critical part is the mathematical interface between the predictive model and the decision model: it should not be a single predicted number (a point estimate) but a probability for each possible outcome given the actual circumstances (a conditional probability distribution). The important point is that conditional probability distributions make it possible to account systematically for the uncertainty of the predictions, so that cost-optimal decisions can be made under uncertainty.
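To make this interface concrete, here is a minimal sketch in Python of a newsvendor-style replenishment decision. It is not taken from the post or the case study below; the Poisson demand forecast, the cost values and the quantities are purely illustrative assumptions. The point it demonstrates is that the predictive model hands over a conditional demand distribution rather than a point estimate, and the decision model then chooses the order quantity that minimises expected cost.

```python
# Minimal sketch: a conditional probability distribution feeding a
# cost-optimal decision. All numbers and costs are illustrative assumptions.
from scipy.stats import poisson

UNDERSTOCK_COST = 5.0   # assumed lost margin per unit of unmet demand
OVERSTOCK_COST = 1.0    # assumed holding/waste cost per unit left over

def expected_cost(order_qty, demand_dist, max_demand=200):
    """Expected cost of ordering `order_qty` units given a demand distribution."""
    cost = 0.0
    for d in range(max_demand + 1):
        p = demand_dist.pmf(d)
        if d > order_qty:
            cost += p * UNDERSTOCK_COST * (d - order_qty)   # lost sales
        else:
            cost += p * OVERSTOCK_COST * (order_qty - d)    # leftover stock
    return cost

# Predictive model output: a conditional distribution of tomorrow's demand
# (here simply a Poisson with mean 12, standing in for a real forecast).
demand_forecast = poisson(mu=12)

# Decision model: choose the order quantity that minimises expected cost.
best_qty = min(range(0, 60), key=lambda q: expected_cost(q, demand_forecast))
print(best_qty)
```

A point-estimate interface would simply order the expected 12 units; because the distribution is available and understocking is assumed to cost more than overstocking, the cost-optimal order here ends up larger than the point estimate.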

Automating Micro-Decisions

Have a look at the following slide, captured from a presentation by M. Michaelis at the 4th Big Data & Analytics Congress [3]. It shows the out-of-stock rate of 10 stores whose replenishment processes were switched to a data-driven approach based on conditional probability distributions for expected sales. In the beginning the suggested replenishment orders could still be altered by staff, but after a transition period the processes were switched to full automation. The slide is in German, but the diagram speaks for itself: the out-of-stock rates plummet after the switch to fully automated replenishment orders.


References

[1] D. Kahneman. Thinking, Fast and Slow. Farrar, Straus and Giroux, New York, 2011.

[2] Amos Tversky and Derek J. Koehler. Support theory: A nonextensional representation of subjective probability. Psychological Review, 101(4):547–567, 1994.

[3] Mark Michaelis. Case Study Kaiser’s Tengelmann: Prognoseverfahren im Dispositionsumfeld (forecasting methods for replenishment). Presentation at the 4th Big Data & Analytics Congress.
