Keeping Up With Data #58
Data is a strategic asset and should be treated as such. Companies should be maximising its economic value and diligently managing their investment in data. These were some of the main messages of my recent talk and article. The question many people asked me afterwards: how do you evaluate the (economic) impact of data and analytics? The short answer, imho, lies in data-driven decision making. That’s right, even data teams should be data driven.
And for a strategic asset to have impact, there needs to be a strategic executive, smart prioritisation, and sometimes even a 2.5-billion parameter model.
- The missing analytics executive: More and more data professionals are realising that for data to have impact, someone needs to represent it at the very top — at board level. The CDO role is still rather new and in many cases very much operational, tactical at best. I’ve always thought the breakthrough lies in making the CDO more strategic and lifting the role’s status on the corporate ladder. But Benn suggests instead parachuting a chief analytics officer (without the operational duties of a CDO) behind the doors of the boardroom — much like distinguishing the roles of a strategic CTO and an operational VP of engineering. (Benn Stancil)
- Data Advantage Matrix: A New Way to Think About Data Strategy: I’m always preaching that a data strategy needs to be aligned with the business strategy, and in my view it is dangerous to treat a data strategy as a mere collection of data projects. Still, we often need to prioritise what to do first, and that’s where Prukalpa’s prioritisation matrix can help. She also argues that data projects shouldn’t be prioritised through an ROI lens but through an “advantage” lens — a sort of ‘tomorrow’s thinking today’. Maximise the ROI now, but also make sure you’re still relevant in five years. (Prukalpa @ TDS)
- Turing Bletchley: A Universal Image Language Representation model by Microsoft: Microsoft introduced a 2.5-billion parameter Universal Image Language Representation model a few weeks ago. The model is built on two ideas: (1) language and vision are inherently linked (when we hear a sentence, we imagine it, and vice versa); and (2) vision is a global modality (so an image can be described in any language). The result is a “one-of-a-kind universal multi-modal model that understands images and text across 94 different languages.” It enables, for instance, searching for images in any of those languages (or even a combination of them) and finding semantically similar images. Cool, right? (Microsoft Research Blog)
Talking to people in the US celebrating Thanksgiving, seeing South Africans getting ready for their summer holidays / Christmas combo, and trying to park in a totally full supermarket parking lot in Zurich today all signal that the end of the year is approaching quickly. So, let’s finish the year strong!