Showing posts with label Opinion. Show all posts

Friday, 25 December 2020

This Pandemic is an Existential Crisis for Cinema

With not enough cinemas globally open to show the tentpole content, studios are electing to release on their own streaming services. This is precipitating an existential crisis for cinema. [blog.mindrocketnow.com]


During Lockdown 1, you might remember that Trolls World Tour made a splash by bypassing theatres and being released on streaming services only. I remember because spending on bus-side ads was pretty much frozen, so pictures of those little blighters lingered for a long time. AMC took particular umbrage (possibly not at the bus posters, but at being disintermediated) and blacklisted NBCUniversal.


But Trolls World Tour made $95M in 2 weeks, which compares favourably with its $90M production cost. It compares very favourably when you consider that NBCU didn't have to share any of that revenue with the theatre groups (hence AMC's outrage). Predictably, this strategy has been repeated with other high-profile movies.


Disney elected to stream Mulan on Disney+ for a £20 Premier Access fee: not PPV, but a one-off payment to access a premium subscription tier comprising a single movie. It became Disney's lowest-grossing live-action remake to date, and won't recoup its $200M budget (yes, there were mitigating factors: the freezing of China-US relations made this movie a cultural casualty).


Christopher Nolan's Tenet wasn't released online, only in theatres, and made $347M at the box office (against a $205M cost). However, that box office is shared with the theatres, so WarnerMedia didn't recoup its investment; it was also Christopher Nolan's worst-performing movie to date.


Mulan will be judged a success and Tenet a failure. The reason is that Mulan drove increased take-up of Disney+ (though Disney won't confirm how many people took out a Disney+ subscription and paid the premium just to watch Mulan). So the cost of the movie can be somewhat offset by a reduced cost of acquisition for the new Disney+ subscribers.


So it's not surprising to me that Wonder Woman 1984 will be released in cinemas and online on HBO Max today (Christmas Day 2020). Not only will WarnerMedia not have to share revenue, but it'll drive more people to its OTT service (which is fourth in a field of three, so really needs to catch up). What is surprising is that WarnerMedia is not intending to charge a premium for it.


It’s conventional wisdom that even in a recession TV survives household budget cuts. TV subscription is a recurring expense, and going to the cinema is an infrequent treat. Pricing PPV as the latter rather than the former is problematic. Charging a content premium when your market is suffering from increasing unemployment is a hard sell. When the Premier League tried to charge £15 per match, fans revolted. The best riposte to this price gouging was from Newcastle United fans. Instead of paying the broadcaster, they donated the same money to a local food bank, raising over £20k. It’s not necessarily the amount, but the context.


At £20 per view, each movie will have to keep its costs down to Trolls levels to break even. The pricing looks much more compelling at £15 with a complimentary digital download later, or a £20 one-off upgrade to a premium subscription tier, as long as there's more than just the one blockbuster. And if there isn't an upgrade fee at all, if the new title simply adds to the value proposition of the service, then the decision is a no-brainer.
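The break-even arithmetic above can be sketched in a few lines of Python. This is my own back-of-envelope model, not anything from the studios: it simply divides production cost by per-view revenue, using the budget figures quoted in this post.

```python
def breakeven_views(production_cost, price_per_view, platform_share=1.0):
    """Households needed to recoup the production cost at a given PPV price.

    platform_share models how much of each payment the studio keeps:
    1.0 for direct-to-consumer streaming, roughly 0.5 for a theatrical split.
    """
    return production_cost / (price_per_view * platform_share)

# Illustrative figures from the post: a Trolls-scale budget vs a Mulan-scale one
print(breakeven_views(90e6, 20))   # Trolls World Tour: $90M budget at ~$20/view
print(breakeven_views(200e6, 20))  # Mulan: $200M budget at ~$20/view
```

At a Trolls-scale $90M, you need about 4.5M paying households; a $200M blockbuster needs roughly 10M, which is why only Trolls-scale costs break even at this price point.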


If you’re Disney or WarnerMedia or Amazon or Netflix, the pandemic has served to increase your addressable market for your OTT service. However, it seems clear that this pandemic has hastened the demise of the multiplex. We’re not going back to a tentpole movie filling up every screen in a multiplex on opening night. Indeed, we haven’t had that in a decade. 

Cinema is no longer the premium venue to see premium content - that is now your own home. Instead, I can see chains like Everyman and The Prince Charles Cinema flourishing, spending less on screening rights, and spending more on creating an experience for cinema lovers. 


Merry Christmas everyone, and see you in 2021!

Friday, 11 December 2020

Effective not Efficient

How do you know if the things you did today moved the needle? Delivered value to the business? And if you don't know, why did you do them? [blog.mindrocketnow.com]


The answer to this last question normally boils down to being too busy to stop and think. Making time for reflection ends up being prioritised below real work, because there's no immediate output. The name of the game is to produce as much output as possible, because output is what can be measured, put into annual appraisals, and paid a salary against; so we assume it's an accurate proxy for delivering value.


There’s a logical thread from optimising output:

  • To maximise output, we need to maximise utilisation;

    • Which means we need to eliminate anything that impedes delivering output, anything that isn’t writing code;

      • But we know that eliminating planning is bad, so instead, we do all our big planning up front;

        • And we know that plans fail because of lack of contextual knowledge when the plans are made, so we need to do big design before big planning;

          • To get the design right, we need to have clarity of the product we’re intending to put into market, so we need to put big market analysis and big product analysis before big design;

            • This is a fair bit of work to do up front, with varying skill sets, probably from different teams, so we’ll need the various resource holders to agree that this is the right thing to commit their resources to = organisational alignment;

              • To secure that commitment, we’ll need to prove that we’ve thought it through, and present a business case that associates output with investment;

                • This business case is important, so we’ll need to put some thought into it, so we’ll need a little planning, little design, little analysis, little resourcing, little commitment…

                  • This is getting a bit recursive now…


Let's look at the opposite position, where we aren't maximising the quantity of output, but focusing on creating the best output (let's assume it's software).

  • We’ll probably have scrum teams with all the skills in-team to create successful, well-thought-out code;

  • Because all the skills are in-team, we’ll be more nimble, responding to changes in context quickly, so the code will probably take less time to complete;

  • We’ll probably be using DevOps techniques, so the code will have security, operability, quality baked in from the start;

But:

  • We still won’t know if we’ve delivered any value, or merely great code.


Both cases are failures, because both rely on the assumption that output is a good proxy for value. It isn’t.


I once transformed a software delivery capability within a company that was overly focused on cost control, and therefore big-everything-up-front, into one that was able to deliver quality code reliably without the big up-front, and much more cheaply. Once the spend rate and code quality were no longer concerns, we were able to focus on what delivered value. And as it turned out, none of the features being mooted actually moved the market. Our investment was better made in marketing and content, rather than further app changes. So I pivoted the team to the next market, with a clear conscience, knowing that done = Done.

Building software is complicated. Building the right software is not complicated, but it is harder. It requires trust that your process will yield good output, so the focus can be on understanding which is the right output.

Friday, 4 December 2020

Media Tech Bites: Serving the Niche

Here’s my short take on a piece of media technology. This week I look at how the big broadcasters are ignoring niche at their peril. Do you agree? [blog.mindrocketnow.com]


It's a curious phenomenon that whilst there's more content being produced than ever, there's less choice being exercised. In making a choice, we are products of our unconscious biases. Default bias will guide us to choose the first in a long list of programmes, the pseudocertainty effect will guide us to choose the most familiar as the most risk-averse, projection bias will make us overestimate how much we'll actually want to finish the series that we started, and the sunk cost fallacy will make us watch every episode until we get to the end. Because we just don't want to incur the cognitive pain of choosing something new, we watch another episode of Friends.


The reason that we don't choose something new, something eye-opening, something horizon-expanding is that content service providers are really bad at providing good options for us to choose from. Netflix is the market leader, so should be really good at this, and yet we ended up watching the latest Adam Sandler Halloween movie for our family movie night. I assumed it was going to be bad, IMDb rated it as bad, and yet we ended up watching it because it was a choice we all settled on; perhaps it incurred the least cumulative cognitive pain.


Technology should be enabling more choice. The incremental cost of distributing another VOD asset is negligible. The incremental cost of another broadcast channel is low enough that a “pop-up” broadcast over satellite is eminently economically viable. And costs will inevitably be driven down as broadcasters take advantage of economies of scale of the cloud. But the opposite is happening. 


Wouldn't it be great if technology could take on some of the cognitive load? Rather than relying on sifting through irrelevant recommendations, wouldn't it be better if there were some sort of artificial precognition that could be applied to content catalogues to find something to watch? The distinction between these two approaches is important.


We don’t need more big companies to suck more data about us so that they can better market their content to us, and sell our digital soul to everyone else. We need to take control of our choice, and we need tools to help us do so. I’m looking forward to the day when I can apply my emulated thought process, on technology that only I can access, to pre-sift the increasing tide of dross for me, so that I can spend my time watching the gems.

Tuesday, 1 December 2020

Remembering Quibi

The brief rise and precipitous fall of Quibi perhaps shows that the world doesn’t need yet another big streaming service. What can we learn? Do you agree? [blog.mindrocketnow.com]


As I write this, the polished https://quibi.com/news site still presents a list of achievements: a new series, reflected glory from the Emmys. The news ends in September 2020, and omits the last big piece of the story: that it closed after 6 months and $1.75B. Most startups fail, especially those in crowded markets, but this one failed more thoroughly than most.


In a crowded market, you have to differentiate, and that differentiation should address a gap in the market. Digital media services have been trying, with varying degrees of failure, to combine the way people want to consume content with the way people want to communicate: streaming and social media. So it made sense that this is the gap Quibi should focus on: being a social streaming service. Quibi focused on short form and mobile because that's how people live their social media lives.


Analysts were impressed with the “mission to entertain, inform and inspire with fresh content from today’s top talent—one quick bite at a time”, as were investors. And a lot of investment was needed. Quibi wanted to differentiate itself from user-generated content by having high artistic and production values. There's not a lot of this premium content to acquire, so Quibi had to produce the content itself. And premium content is expensive to create. Netflix spent $17.3B in 2020, so if you want to be in the same game, you'll have to spend billions too.


The current market has shown that people will pay for content. US research shows that people are willing to pay at least $10 to $20 per month for streaming services. Backers saw an addressable market of the size of YouTube’s 2B monthly active users. It would only take a fraction of a percent of that market with a regular subscription to pay back that investment manyfold. YouTube itself has 20M subscribers for its Premium service. At Quibi’s $4.99 per month this makes a healthy revenue stream. The business case seems straightforward.
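As a sanity check on that business case, here's my own back-of-envelope sketch, using only the figures quoted in this post ($1.75B raised, $4.99/month, YouTube's 2B monthly active users). It deliberately uses gross subscription revenue, ignoring content spend, churn and platform fees, so it's the most optimistic possible reading.

```python
def months_to_recoup(investment, subscribers, monthly_price):
    """Months of gross subscription revenue needed to cover the investment.

    Gross revenue only: no content spend, churn, or app-store fees,
    so this is a deliberately optimistic lower bound.
    """
    return investment / (subscribers * monthly_price)

# Subscriber counts as fractions of YouTube's 2B MAU
for share in (0.001, 0.01):  # 0.1% and 1% of 2B
    subs = 2e9 * share
    print(f"{subs / 1e6:.0f}M subs -> "
          f"{months_to_recoup(1.75e9, subs, 4.99):.1f} months to recoup")
```

Even under these generous assumptions, a YouTube-Premium-scale 20M subscribers (1% of the addressable market) would take about a year and a half of gross revenue to cover the raise; a "fraction of a percent" takes over a decade. The case is straightforward only if the subscribers both arrive and stay.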


With this huge investment, they went all in on premium content. Quibi covered the production costs, unlike cinema, where producers take the financial risk and then recoup from box office receipts. This enabled content creators to take risks. The creators seemed to respond with genuinely interesting ideas, using the 10-minute cap on duration to spur creativity.


Was it a casualty of the pandemic? Well, Disney+ and HBO Max managed to launch and sustain themselves, so no. The pandemic didn't stop it getting mind share of its target audience through buying influencers. The pandemic didn't stop it being downloaded from app stores.


I think Quibi got the market wrong. They thought they were "competing against free". I think they were competing against the pause button. Once you get underneath the gloss and money, I think they were solving the wrong problem. This is hardly uncommon in startups. Most startups realise this when they start to run out of money, so they pivot or fail. Quibi had too much money, so they continued. Perhaps the story would have ended differently if they'd had the same depth of catalogue as Disney or WarnerMedia, but they didn't have enough money for that. Ironically, they simultaneously had too much money to fail fast, and too little money to succeed.


In a crowded market, you have to differentiate, and that differentiation should address a need in the market. Quibi addressed a gap, not a need, and that is ultimately why it failed.

Friday, 27 November 2020

Media Tech Bites: Is DVB now Irrelevant?

Here’s my short take on a piece of media technology. This week I look at a mature, perhaps overripe, technology: DVB. Do you agree? [blog.mindrocketnow.com]


20 years ago, when I first started out in this business, I learnt the DVB standards. This gave me a competitive advantage as a consultant; being confident with the detail down to the specification of the tables meant that I could confidently integrate broadcast systems. My last role was deeply technical as well, but the knowledge that was prized was how to use API wrappers in C++ code. Not once did I need to cast my memory back to DVB standards. For me personally, my hard-won DVB experience is now irrelevant.


DVB (and its family of standards) is still critical to broadcasters. It has fulfilled its promise of lower cost through interoperability. It's been an objective success, and reaches millions of people daily. It's not DVB that's the problem, but broadcast.


It's becoming increasingly obvious that the future of broadcast is to be delivered over the internet. Yes, satellite has a much larger footprint; yes, terrestrial has fine-grained reach; but when I read the tech press, most of the arguments come from a place of wanting to continue to sweat the assets that we spent so much time and effort building. None of that matters to the people who consume these services. As I've often commented, people just want to watch TV without needing to worry about how they're watching it.


So as broadcast becomes just another service over the internet, the skills needed to push this forward are the same as software delivery in any other industry. Just like all those pylons holding up TV transmitters around the country, my memorisation of DVB standards is now just another sunk cost.


Friday, 20 November 2020

Media Tech Bites: Blockchain’s Potential

Here’s my short take on a piece of media technology. This week I look at an immature but maturing technology, blockchain. Do you agree? [blog.mindrocketnow.com]


As I'm sure you know, a blockchain is a way of recording information that is (almost) impossible to cheat. It's trivial to verify, and no single body (like a bank) has to guarantee the chain, so it's eminently suited to industries that suffer from multiple intermediaries and opaque authorities, like finance. Or broadcast?


Blockchain is unparalleled in establishing trust across domains of ownership. Broadcast is all about passing assets across domains of ownership. If each media asset were accompanied by an immutable history, we'd know who contributed, and how much, and therefore how royalties should be shared. We could see a technical history of all the corrections and transcoding across the workflow, and maybe change TV settings to compensate.
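To make the idea concrete, here's a minimal sketch in Python of how an asset's history could be recorded as a hash chain: each event links to the previous one, so tampering with any record breaks verification from that point on. The event fields and actor names are invented for illustration; this is the bare mechanism, not any broadcast standard.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def make_block(event, prev_hash):
    """Create one record in an asset's history, chained to the previous record."""
    payload = json.dumps({"event": event, "prev_hash": prev_hash}, sort_keys=True)
    return {"event": event, "prev_hash": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def build_chain(events):
    """Chain a list of events, each block referencing its predecessor's hash."""
    chain, prev = [], GENESIS
    for event in events:
        block = make_block(event, prev)
        chain.append(block)
        prev = block["hash"]
    return chain

def verify(chain):
    """Recompute every hash; any tampering breaks the links downstream."""
    prev = GENESIS
    for block in chain:
        expected = make_block(block["event"], prev)["hash"]
        if block["hash"] != expected or block["prev_hash"] != prev:
            return False
        prev = block["hash"]
    return True

# A hypothetical asset history passing across domains of ownership
history = build_chain([
    {"actor": "Studio A", "action": "master delivered"},
    {"actor": "Post house B", "action": "transcode to HEVC"},
    {"actor": "Broadcaster C", "action": "compliance edit"},
])
print(verify(history))                                   # intact chain
history[1]["event"]["action"] = "transcode to AV1"       # tamper with a record
print(verify(history))                                   # tampering detected
```

Anyone holding the chain can re-run `verify` without trusting the parties who wrote it, which is exactly the cross-domain trust property described above.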


But it strikes me that blockchain isn't the answer, because broadcast has already optimised the secure distribution of content. I don't mean that content security is perfect, because it's far from it. It's optimised because the leakage of revenue isn't significant enough to merit ripping out and replacing all the technology, legal and consumer investments that we've made.


So for broadcast, and for the time being, blockchain is still a solution looking for a problem.


Wednesday, 11 November 2020

The Problem with Data Driven Decisions

This week I look at the pitfalls of being data-driven. It seems simple, and logical, so why do we end up falling back on our gut feel? [blog.mindrocketnow.com]


Should I make my next investment decision based on objective data, or take a punt based on how I feel? Seems a straightforward answer, doesn't it? Who would want to be responsible for a significant financial decision that was made on a whim? (It kinda depends on whether it worked out…) But making a good data-driven decision is a lot harder than merely intending to do so.


What are you measuring? This simple question unravels a chain of questions that turn out to each require careful thought. Let’s pretend we’re trying to figure out whether to spend money on a cool new app feature:

  • What are the business (or strategic) outcomes you’re trying to influence?

  • How does the app contribute to that outcome?

  • How does the feature improve the ability of the app to contribute to that outcome?

  • What metric can we put on that improvement?

  • How can we measure those metrics? 

  • Are those measurements allowed in our privacy guidelines?

  • What would be the impact on those metrics if we didn’t build the feature?

  • Is there anything else we could do, at less cost, that would improve that metric, even in part?

  • Is there anything else we could do, at the same cost, that would improve an alternative, more important metric?


Most of the time, we don't have a good answer to all of those questions, often because answering them thoroughly would take a lot of effort, more than the effort to ship the feature itself. So we end up making a best guess - aka taking a punt.


I'm really attracted to the concept of lean development, which codifies learning by doing. The idea is to ship features as frequently as possible, measure the impact upon metrics, and improve or discard depending on whether that impact is positive or negative. By keeping this feedback loop really short, we risk less wasted development effort. As a side-effect, we maintain focus on the metrics that matter to us, and maintain a cadence of shipping features.


As ever, it depends upon the metric. If we optimise for a vanity metric (one that doesn't align with a business objective) or a proxy metric (one that doesn't align well with a business objective), then we might miss the business objective. We might optimise for a very fast playback start, but miss the fact that consumers are much more worried about not finding programmes that they like.


After all that, you might be thinking that I advise avoiding making gut feel decisions. You’d be right - gut feel is basically your thought heuristics kicking in, reinforcing all your unconscious biases. Taking a moment to consider rather than react is a good rule. But that doesn’t mean you shouldn’t use your feelings. I’m also a strong believer in eating your own cooking. 


We should be building apps for people, and we need to understand people in detail if the app is going to make a difference to them. The person you get to see the most is yourself, so you should be able to analyse your own reactions to your app in the best detail.


Be data driven but guided by context. The best context is you.


Tuesday, 10 November 2020

Media Tech Bites: Importance of Metadata

Here’s my short take on a piece of media technology. This week I look at why metadata is the most important part of media tech. Do you agree? [blog.mindrocketnow.com]


Broadcast is in the middle of a technology revolution. 20 years ago, I was cabling routers to encoders, making sure that the labels were correct. Now I’m working with engineers to ensure the virtual routing through our AWS services is working as it should, and feeding iOS and Android apps properly.


Not everything is changing. Then and now, we use metadata to describe the services being delivered. In the DVB world of then, the metadata was mainly the Service Information (SI) tables that told set-top boxes where to find the broadcast services in the frequency spectrum. Now, as then, metadata drives content search and discovery. However now, unlike then, metadata drives a far richer discovery experience: we can see trailers where we used to see thumbnails, we can link to IMDb articles where before we had a character-limited synopsis, and we can select the next programme based on what we've just watched or even what mood we're in.


More fundamentally, metadata is starting to drive how the services are created. The cabling and labels are abstracted from the creation of services by software drivers. Middleware orchestrates those drivers into services. Those services can be orchestrated together in very visual “low code” ways. They can be defined in terms of metadata alone. This means UI and UX designers can put together interesting app experiences without needing to know Kotlin or Swift. It will mean that experiences can be put together programmatically, such as a UI that is reactive to the content that you’re watching, or the context that you’re watching it in.
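As a toy illustration of what "defined in terms of metadata alone" might look like, here's a sketch in Python. The rail definition, field names and stub catalogue are all invented for illustration: the point is that the UI element is pure data, resolved by a generic renderer, so adding or reordering rails needs no Kotlin or Swift changes.

```python
# Hypothetical rail definition: the UI element is described entirely as metadata
rail = {
    "id": "because-you-watched",
    "layout": "landscape-carousel",
    "query": {"similar_to": "last_watched", "max_items": 12},
}

def render(rail, catalogue_search):
    """A toy renderer: resolve the rail's metadata query against a catalogue."""
    items = catalogue_search(**rail["query"])
    return {"id": rail["id"], "layout": rail["layout"], "items": items}

# Stub catalogue search, standing in for a real content-discovery service
result = render(rail, lambda similar_to, max_items: [f"title-{i}" for i in range(max_items)])
print(result["layout"], len(result["items"]))
```

A designer changes the dict; the renderer and catalogue stay untouched. The same data could just as easily be generated programmatically, which is what makes a context-reactive UI possible.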


If this all seems familiar, it's because the enterprise IT sector went through this change last decade. Its experience is that metadata drives the creation of the service, the consumption of the service, and the quality assurance of the service. To stay relevant, it'll be necessary for all of us to be metadata-literate.


Sunday, 25 October 2020

Whataboutism

I’m on a US TV diet of shrill news shows and late night chat shows. I’m saddened by what it says about us all. [blog.mindrocketnow.com]


Where to begin.


Perhaps it’s the conflation of what is important with what captures attention. This causes Presidents to think that retweeting insane conspiracy theories is acceptable. It creates an environment where enormously rich companies get even richer because they give a voice to everyone, regardless of whether that person causes harm with that voice. 


This false conflation causes noisy trivia to become important, because the news is obliged to report on it just because someone important said it. It enables important people to obfuscate their inadequacies by promoting nonsensical bluster. It empowers everyone to retweet opinions without checking whether they're true, or even without believing them, because they're copying a role model. It polarises everyone and everything, because you are forced to have an opinion, because the news shows are forced to report on the bullshit that emanates from people in power, which they do gleefully and ever more shrilly, just to capture your attention.


This dangerous conflation causes vitally important debate to be lost because it’s dropped off the current news cycle. Climate change hasn’t gone away just because there’s an election in the US. Institutional racism in the West hasn’t been eradicated just because there’s a large rally. The Covid pandemic isn’t over because we’re tired of being afraid of it. Trying to keep what’s really important straight in your own mind is mentally exhausting.


When I look at the news, all I see is the responsibility to speak with honesty and integrity being abdicated. Apparently, it's up to us to decide whether that tweet is true, false, nonsense or criminal, not the person who posts it. Apparently, wearing a mask is not a sign of how much you care about the people around you, but a sign of your political affiliation. Apparently, our trust in institutions is easy to squander, because we're all too happy to reinvest it.


And if this behaviour is called out, it's too easy to deflect by asking “what about you and your shortcomings?”. I'm a collection of contradictions, all very obvious for everyone to see. It's an impossibly high standard to need to fix all of my inadequacies before I can critically evaluate the crap that comes into my news feed. Whataboutism, or its more cowardly sibling, Iwasonlyjoking, is not moral equivalency, and doesn't excuse elected officials' bile. Checking “I consent” doesn't empower tech giants to feed me that bile.


There is a difference between free speech and inciting hatred. It is possible to both recognise that Black Lives Matter, and be proud of how we can all live together. It is possible to say something and mean it.


--

If you’ve gotten this far into my post and haven’t started haranguing me in the comments, thank you for letting me get this off my chest. Normal service will resume in my next post.


Tuesday, 21 July 2020

My road-tested GTD setup using Todoist

A key part of my working from home habit is the Getting Things Done technique. This is how I’ve set it up. How does it differ from your setup? [blog.mindrocketnow.com]


Getting Things Done has revolutionised my productivity. I’m now confident I know what to do and when. More importantly, I’m confident when I don’t need to do anything, so I can enjoy my down time. This post isn’t to extol the virtues of GTD, but to get into the detail of how I’ve implemented it, as the GTD technique leaves the implementation to individual preference. 


Everything is paperless for me (at least as much as it can be), so my GTD implementation is totally electronic too. I'm omnivorous in the technology I embrace, so I need my GTD setup to work across Echo devices, Android tablets, iPhones, Apple Watch and on the web. My primary tool of choice is Todoist, which is available across all these platforms.


Capture and Clarify

The first element of a GTD setup is Capture. I've tried to reduce my capture channels as much as possible so that I can be confident that nothing gets lost as it's passed to me, which is why I funnel everything into my Todoist inbox. All my multiple email addresses are now aggregated into Spark (app only, no desktop client), and anything that needs actioning gets sent to my Todoist inbox. I use Feedly to aggregate my RSS feeds, and anything that needs actioning, or perhaps some dedicated reading time, also gets sent to my Todoist inbox. Finally, I still have some paper that comes my way, which goes into my IRL inbox on my desk. As this gets processed, I transcribe items into tasks that go into my Todoist inbox.


Inevitably, there are more capture channels than can be aggregated into Spark, Feedly or my inbox. A couple have integrations with Todoist: Slack, and my work Outlook email. Most do not: LinkedIn messages, WhatsApp messages, my notebook, voicemail, the table by the door, my wallet, people telling me stuff. These need to be transcribed into my Todoist inbox.


Then comes processing the captured items. I take each item from my Todoist inbox, top to bottom, and Clarify it by adding enough detail that it turns into an action; I then organise the actions into Projects, and do them according to my context (or what I feel up to doing next).


Sometimes clarifying needs additional contextual information, which GTD calls Project Support. I keep as much of this electronic as possible, in OneNote. I chose OneNote after a dalliance with Evernote because Microsoft seems uninterested in charging for pro-level software, at least for now. Anything that can be digitised, including photos and web clippings, goes here, with titles that enable cross-referencing to the Todoist action. I also add the URL of the page as a comment on the task, to enable one-click linking of action to context.


But oftentimes you can't get rid of physical stuff, so I also have an expandable folder comprising clear plastic A4 wallets to hold it. It's important to process the stuff before putting it into this folder, to avoid the folder becoming another capture channel, yet another thing to keep track of.


Do the easy tasks first

One of the powers of GTD is that it quickly filters noise out of your inbox by asking some simple questions. The first is: is it actionable? If it isn't, then it's one of: poorly defined, so you need to clarify it some more; not worth doing, so you bin it; good to remember, so you archive it (again in OneNote); or something to be done at a future date, perhaps when there's more information, which GTD calls Incubate. For actions to Incubate, Todoist enables adding time and date alarms, and integrates with Google Calendar so that you can diarise them. If you need to develop further information, that becomes a new action in the inbox.


If it is actionable, GTD's second question is to decide: can you defer it (use the same technique as Incubating)? Can you delegate it (email it, with the supporting OneNote page if applicable)? Can it be done in 2 minutes, in which case do it now? Or do you need to set some time aside to do it?


The 2-minute rule is probably the most powerful tip for me. 2 minutes is all it takes to write an acknowledgement email, look something up in the archives, or schedule a meeting. However, it's not long enough to compose an email with real information content, write something new for the archive, or write a good agenda for a meeting. So the 2-minute rule filters out all the low-brainpower tasks, which can be done immediately.


Get to work!

The first scan leaves the list of actions that require real work to complete. The scan also familiarises you with the actions, so you'll inevitably start prioritising them. Todoist has a day view which presents all the tasks with time dependencies, and has tags to describe priority and other categorising features. GTD recommends you do tasks according to your context: if you're at your computer, do your computer-based tasks. However, understanding your context is quite nuanced, and this is the area that has seen the most change in my GTD workflow.


I’ve set up the priorities to be:

  1. Overriding priority, do it right now - used sparingly

  2. Impactful actions - stuff that makes an immediate difference

  3. Possible actions - things I can do right now, without waiting for someone else or needing to develop any new information

  4. Default


I’ve also set up context labels:

  • Someday and Soon to filter out the actions that need thinking about at a later date

  • Watch List and Reading to classify entertainment recommendations

  • Agenda For and Waiting For to capture the actions from other people that will enable me to do my action

  • Offline because some stuff needs actual physical labour rather than pressing of keys

  • Online because most of my world is online

  • Networking because I find this really hard, so it gets some special focus

  • Decision because some decisions need consideration even if I have all the information, and this gives me permission to take the time to think


My day is then organised by these tags. Todoist has a powerful query feature which enables me to create views that give me my work list for each of my contexts. These are (together with the filter):

  • Urgent tasks (P1 or those requiring a Decision)

  • Impactful tasks (P2)

  • Next actions for work (P3 or above and in my work projects, but not in my reading or deferred lists)

  • Next actions for personal (P3 or above for actions for my family, which get a different focus)

  • Next actions for errands (P3 or above and in my home projects, but not in my reading or deferred lists)

  • Agenda for stuff to talk about with my wife

  • Agenda for others

  • Entertainment list (reading and watching)

  • Final long list for my Weekly Review
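For illustration, the first few views above might look something like the following in Todoist’s filter query syntax. The author’s exact queries aren’t given, so the label and project names here are my own invention:

```
Urgent tasks:            p1 | @decision
Impactful tasks:         p2
Next actions for work:   (p1 | p2 | p3) & #Work & !@reading & !@someday & !@soon
Agenda for my wife:      @agenda_wife
Entertainment list:      @reading | @watch_list
```

Todoist combines priorities (`p1`–`p4`), labels (`@`) and projects (`#`) with `&`, `|` and `!` operators, which is what makes per-context views like these possible.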


Future changes to the workflow

One of the features of work in lockdown is that pretty much all work is online and remote, so the Online label no longer helps to contextualise anything. Others have had success using 10min, 1h and 1d labels to indicate how much effort will be needed.


Todoist has a gamification element called Karma. You set a number of tasks to complete each day and each month, and watch your progress up the Karma rankings. I’m not getting through the same number of tasks now as I did before, so I need to reflect on whether I’m capturing the right actions, or perhaps not binning enough.


However, these are tweaks. Todoist + integrations + GTD has served me extremely well over the last 2 years, and I recommend the approach to everyone.


Friday, 3 July 2020

Controlling the Algorithm

Can algorithms be racist? If they can, how can we control them? In this post, I look at the presentation of bigotry in technology and what we can do about it. [blog.mindrocketnow.com]


As part of my Python course, I’ve been learning machine learning techniques, or how bots recognise patterns in data sets by calculating correlations. In other words, how to create a decision-making algorithm. When I looked up from my keyboard, I began to notice that algorithms are getting some very bad press at the moment, which made me think a bit deeper. 


History of bad ideas

Facebook is suffering from more sponsors withholding ad spend. Despite Facebook claiming today “There is no profit to be had in content that is hateful”, the company still cannot stop big brands’ ads being placed next to racist posts. Big brands are responding in a way that’s eye-catching, by withholding ad spend. Eye-catching, but perhaps not ultimately effective, as just 6% of Facebook’s revenue is from big brands. The remaining 94% comes from hundreds of thousands of businesses around the world, who cannot afford alternative methods of reaching their audience. The extraordinarily broad success of Facebook’s algorithm emboldens it to be blasé towards both government and big business, despite civic and corporate activism. And Facebook has some truth on its side: its algorithm wasn’t designed to promote racism, so how can it be responsible for racist outcomes?


Perhaps you remember Microsoft’s Tay bot, born in 2016. Within 16 hours it became a staple of future AI courses as a cautionary tale. Microsoft designed Tay to learn language from people on Twitter; unwisely, it also learned their values. Trolls targeted the bot and trained it to be a racist conspiracy theorist - presumably just for fun. Which is essentially how trolls birth other trolls in their online echo chamber.


Things haven’t improved over time. Let’s try an experiment together right now. Perform a Google image search for “unprofessional hair”. What do you see? I did that just now, and saw pictures of mostly black women, which implies that most women with unprofessional hair are black. Which is racist. Was the algorithm that presented the results racist?


Enough people labelled pictures of black women with “unprofessional hair” that Google’s algorithm made the correlation and applied that inference to all the photos that it came across. This news story first broke in 2016. Before then, you only saw pictures of black women. Now, you see news stories interspersed with pictures of black women. Which shows that Google’s algorithm can’t distinguish between the two types of results.
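A toy sketch makes this mechanism concrete. The data below is invented purely for illustration: a model that simply predicts the most frequent label it has seen will faithfully reproduce whatever skew its human taggers had.

```python
from collections import Counter

# Invented, deliberately skewed tags: which subjects people labelled
# "unprofessional hair" in a hypothetical photo data set.
tags = ["black woman"] * 80 + ["white woman"] * 20

counts = Counter(tags)

# A naive model "learns" by predicting the class it has seen most often.
prediction = counts.most_common(1)[0][0]

print(prediction)  # -> black woman: the model reproduces the taggers' bias
```

Nothing in the code is racist; the bias lives entirely in the labels it was fed, which is exactly the point.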


It’s worth repeating: the algorithm cannot differentiate between non-discriminatory stories about discrimination, and results that imply discriminatory conclusions. This is because digital footprints never go away; digital history is only additive. But at least whilst the search algorithm reductively simplifies and generalises, it doesn’t confer moral value, as both types of results are shown together.


It occurs to me that this is no different to people simplifying and generalising. But people normally understand that individual interactions are nuanced and to be judged on their own merits. Algorithms inherently do not. And it’s so much worse when algorithms enable ill-judged conclusions because they’re so impactful when they get it wrong. Algorithms now control all the complex transactions in life:


  • Presenting options for what to watch next in YouTube;

  • Adjusting your insurance premium based on how hard you brake and accelerate;

  • Sequencing traffic lights in city centres;

  • Filtering your CV for keywords, to see if you’re a good candidate to interview;

  • Then assessing how you might fit into a company in your first video interview by measuring how you fidget;

  • Analysing credit card transactions to spot fraudulent activity;

  • Predicting crime hot spots based on the wealth of neighbourhoods;

  • Spotting the faces of terrorists flagged on watch lists on public transport CCTV.


Correlation is not causality

The core of the problem is that algorithms present correlation, and we interpret it as causality. Which at best is spurious, and at worst is bigoted. Algorithms aren’t inherently problematic, but can become so because people are, and algorithms learn from people. The results aren’t inherently problematic, but do invite problematic conclusions if we don’t understand their limitations. This train of logic is how we end up with discriminatory health insurance pricing.
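A worked example of how easily correlation masquerades as causation, using the classic ice-cream-and-drowning pairing (the figures are invented; both series are really driven by a third factor, summer temperature):

```python
# Two series that share a seasonal trend but have no causal link.
ice_cream_sales = [10, 20, 30, 40, 50, 60]   # monthly, invented
drownings       = [1, 2, 2, 3, 4, 5]         # monthly, invented

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, drownings)
print(round(r, 2))  # -> 0.98, yet ice cream does not cause drowning
```

An algorithm trained on this data would happily "predict" drownings from ice cream sales; only a human knows to look for the confounding variable.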


Algorithms can be inspected, whereas humans cannot. But let’s not mistake this transparency for understanding. Even if the algorithm itself is clear and concise, the data sets are often complex, which makes outcomes unpredictable and hard to understand. Then, because we don’t understand them yet believe that some other smart person could if they wanted, we over-trust them. There’s no civic demand to examine them, only civic acceptance of their conclusions. Which is how we end up with law enforcement resourcing algorithms over-policing poor neighbourhoods, and becoming part of the problem. Funding according to the algorithm targets the correlation, but not the cause.


So we have seen that spurious correlation and inherently biased data sets are two major weaknesses of algorithms. The third major problem is that algorithms use past data to predict the likelihood of outcomes. As every investor knows, past performance does not necessarily predict future results. Decisions based on likelihoods are very bad at figuring out what to do in edge cases. When you apply those algorithms to millions of decisions, the number of bad decisions at the edges mounts up. And each of those bad decisions changes someone’s life. Each edge case matters to someone.


Sometimes it’s beneficial to allow people to game algorithms. For instance, part of the duty of hospital administrators is to work the NHS appointments system so that patients can reorganise treatments for their own convenience. Human intervention is needed because the appointments system optimises for hospital resources, not for patients.


Last Monday, TikTok and K-pop fans claimed responsibility for the lack of supporters at Donald Trump’s campaign rally. They understood how TikTok’s algorithms boost videos, used them to promote the prank to like-minded activists, and deleted their posts after a day or two to avoid the plan leaking to Trump’s team.


Not understanding these algorithms is no longer a feasible option. The world has become too data-rich to be navigated without help from artificial intelligence. We can either manipulate the algorithms to our benefit, or they will manipulate us to theirs. So what can we do about it?


Legislation and activism

It’s not illegal for a business to prioritise its resources to serve customers that are willing to pay the most. For example, it’s not illegal to prioritise call centre agent pick-up times based upon whether your number matches a list of high-value customers, even if it means you don’t answer calls from low-value customers at all. But it is discriminatory. 


In the EU, GDPR gives citizens the right to an explanation of automated decision-making. On request, companies are obliged to explain how shared data links to decisions made about customers, and the impact of those decisions. This lays an important legislative foundation, as it forces companies to understand how data links to decisions, which many do not. However, the legislation falls short of protecting against bad algorithms, bad data and bad decisions.


As we’ve seen, the combination of complex logic, inherently biased data sets and prescriptive application makes algorithms overly blunt instruments. This is now recognised by leading tech companies, perhaps more so than by governments. Amazon notes that technology like Amazon Rekognition should only be used to narrow the field of potential matches, but because legislation has yet to catch up with this, it has implemented a one-year moratorium on police use. Amazon recognises that asking individuals to safeguard themselves against misapplication of its algorithms is unfair, because it’s just too complex for end consumers.


This is exactly where we need legislation to protect us. My hope is that parliaments will enact well-considered limits on use of algorithms in industry and government, focusing on public safety and law enforcement. We should then use these limits to hold companies and agencies who do not safeguard their algorithms to account.


Legislation is not the only tool that we have. As I’ve noted earlier, inspecting algorithms and data sets is out of the reach of all but data scientists, so is useless to most agencies. However, specific testing for discriminatory outcomes isn’t. So another important tool is to empower trading standards bodies to test for algorithmic bias, the same way that they test for food hygiene.


Finally, there are things that end consumers can do to train the algorithms. We can increase our social connections, because online segregation is as destructive as physical segregation. When we expose ourselves to more opinions, algorithms are exposed to more diversity and present less echo-chamber clickbait.


We shouldn’t engage with the clickbait, the posts that elicit strong emotion. Imagine if the whole world scrolled past the trolls: the oxygen would be removed, and the trolls would wither away, because the algorithms would see that bilious content is not clickworthy.


Fundamentally, we should play nice so that algorithms don't make bigots of us all, otherwise we only have ourselves to blame.