Friday 25 December 2020

This Pandemic is an Existential Crisis for Cinema

With not enough cinemas open globally to show tentpole content, studios are electing to release on their own streaming services instead. This is precipitating an existential crisis for cinema. [blog.mindrocketnow.com]


During the first lockdown, you might remember that Trolls World Tour made a splash by bypassing theatres and going straight to premium on-demand. I remember because spending on ads on the side of buses was pretty much frozen, so pictures of those little blighters lingered for a long time. AMC took particular umbrage (possibly not at the bus posters, but at being disintermediated) and blacklisted NBCUniversal.


But Trolls World Tour made $95M in 2 weeks, which compares favourably with its $90M production costs. It compares very favourably once you consider that NBCU didn’t have to share any of that revenue with the theatre groups (hence AMC’s outrage). Predictably, this strategy has been repeated with other high-profile movies.


Disney elected to stream Mulan on Disney+ for £20 Premier Access - not PPV, but a one-off fee to access a premier subscription tier comprising a single movie. It became the lowest-grossing of Disney’s live-action remakes to date, and won’t recoup its $200M budget (yes, there were mitigating factors: the freezing of China-US relations made this movie a cultural casualty).


Christopher Nolan’s Tenet wasn’t released online at all, only in theatres, and made $347M at the box office (against a $205M cost). However, that box office is shared with the theatres, so WarnerMedia didn’t recoup its investment; it was also one of Christopher Nolan’s worst-performing movies to date.


Mulan will be judged a success and Tenet a failure. The reason is that Mulan drove increased take-up of Disney+ (Disney won’t confirm how many people took out a Disney+ subscription and paid the premium just to watch Mulan). So the cost of the movie can be somewhat offset by a reduced cost of acquisition for new Disney+ subscribers.


So it’s not surprising to me that Wonder Woman 1984 is being released in cinemas and online on HBO Max today (Christmas Day 2020). Not only will WarnerMedia not have to share the streaming revenue, but it’ll drive more people to its OTT service (which is fourth in a field of three, so really needs to catch up). What is surprising is that WarnerMedia is not intending to charge a premium for it.


It’s conventional wisdom that even in a recession, TV survives household budget cuts. A TV subscription is a recurring expense; going to the cinema is an infrequent treat. Pricing PPV as the latter rather than the former is problematic. Charging a content premium when your market is suffering from rising unemployment is a hard sell. When the Premier League tried to charge £15 per match, fans revolted. The best riposte to this price gouging came from Newcastle United fans: instead of paying the broadcaster, they donated the same money to a local food bank, raising over £20k. It’s not necessarily the amount, but the context.


At £20 per view, a movie has to be made on a Trolls-sized budget to break even. The pricing looks much more compelling at £15 with a complimentary digital download later, or a £20 one-off upgrade to a premium subscription, as long as there’s more than just the one blockbuster. If there isn’t an upgrade fee, if the new title simply adds to the value proposition of the service, then the decision is a no-brainer.
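To make those sums concrete, here’s a rough back-of-envelope sketch in Python. It assumes the studio keeps the whole fee and treats pounds and dollars as roughly equivalent, so the numbers are illustrative only.

# Rough break-even sums for a premium VOD release.
# Assumptions (mine, for illustration): the studio keeps the full fee,
# and £ and $ are treated as roughly equivalent.

def break_even_views(budget, price_per_view):
    """How many households have to pay before the production budget is covered."""
    return budget / price_per_view

for title, budget in [("Trolls-sized budget", 90_000_000),
                      ("Tentpole budget", 200_000_000)]:
    views = break_even_views(budget, 20)
    print(f"{title}: {views / 1_000_000:.1f}M paying households at £20 each")

Trolls found roughly that many paying households in a fortnight; expecting more than double that for a $200M tentpole is a much bigger ask, which is why the subscription-upgrade model looks more attractive.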


If you’re Disney or WarnerMedia or Amazon or Netflix, the pandemic has served to increase your addressable market for your OTT service. However, it seems clear that this pandemic has hastened the demise of the multiplex. We’re not going back to a tentpole movie filling up every screen in a multiplex on opening night. Indeed, we haven’t had that in a decade. 

Cinema is no longer the premium venue to see premium content - that is now your own home. Instead, I can see cinemas like Everyman and The Prince Charles Cinema flourishing, spending less on screening rights, and spending more on creating an experience for cinema lovers.


Merry Christmas everyone, and see you in 2021!

Friday 11 December 2020

Effective not Efficient

How do you know if what you did today moved the needle? Delivered value to the business? And if you don’t know, why did you do it? [blog.mindrocketnow.com]


The answer to this last question normally boils down to being too busy to stop and think. Making time for reflection ends up being prioritised below real work, because there’s no immediate output. The name of the game is to produce as much output as possible, because that’s what can be measured, put into annual appraisals, and paid a salary against - so we assume it’s an accurate proxy for delivering value.


There’s a logical thread from optimising output:

  • To maximise output, we need to maximise utilisation;

    • Which means we need to eliminate anything that impedes delivering output, anything that isn’t writing code;

      • But we know that eliminating planning is bad, so instead, we do all our big planning up front;

        • And we know that plans fail because of lack of contextual knowledge when the plans are made, so we need to do big design before big planning;

          • To get the design right, we need to have clarity of the product we’re intending to put into market, so we need to put big market analysis and big product analysis before big design;

            • This is a fair bit of work to do up front, with varying skill sets, probably from different teams, so we’ll need the various resource holders to agree that this is the right thing to commit their resources to = organisational alignment;

              • To secure that commitment, we’ll need to prove that we’ve thought it through, and present a business case that associates output with investment;

                • This business case is important, so we’ll need to put some thought into it, so we’ll need a little planning, little design, little analysis, little resourcing, little commitment…

                  • This is getting a bit recursive now 


Let’s look at the opposite position, where we aren’t maximising the quantity of output, but focusing on creating the best output (let’s assume it’s software).

  • We’ll probably have scrum teams with all the skills in-team to create successful, well-thought-out code;

  • Because all the skills are in-team, we’ll be more nimble, responding to changes in context quickly, so the code will probably take less time to complete;

  • We’ll probably be using DevOps techniques, so the code will have security, operability, quality baked in from the start;

But:

  • We still won’t know if we’ve delivered any value, or merely great code.


Both cases are failures, because both rely on the assumption that output is a good proxy for value. It isn’t.


I once transformed a software delivery capability within a company that was overly focused on cost control, and therefore on big-everything-up-front, into one that was able to deliver quality code reliably without the big up-front phase, and so much more cheaply. Once the spend rate and code quality were no longer concerns, we were able to focus on what delivered value. And as it turned out, none of the features being mooted actually moved the market. Our investment was better made in marketing and content rather than further app changes. So I pivoted the team to the next market, with a clear conscience, knowing that done = done.

Building software is complicated. Building the right software is not complicated, but it is harder. It requires trust that your process will yield good output, so the focus can be on understanding which is the right output.

Friday 4 December 2020

Media Tech Bites: Serving the Niche

Here’s my short take on a piece of media technology. This week I look at how the big broadcasters are ignoring the niche at their peril. Do you agree? [blog.mindrocketnow.com]


It’s a curious phenomenon that whilst there’s more content being produced than ever, there’s less choice being exercised. In making a choice, we are a product of our unconscious biases. Default bias will guide us to choose the first in a long list of programmes, the pseudocertainty effect will guide us to choose the most familiar as the least risky option, projection bias will make us overestimate how much we’ll actually want to finish the series that we started, and the sunk cost fallacy will make us watch every episode until we get to the end. Because we just don’t want to incur the cognitive pain of choosing something new, we watch another episode of Friends.


The reason that we don’t choose something new, something eye-opening, something horizon-expanding is that content service providers are really bad at providing good options for us to choose from. Netflix is the market leader, so should be really good at this, and yet we ended up watching the latest Adam Sandler Halloween movie for our family movie night. I assumed it was going to be bad, IMDb rated it as bad, and yet we ended up watching it because it was a choice that we all settled on - perhaps it incurred the least amount of cumulative cognitive pain.


Technology should be enabling more choice. The incremental cost of distributing another VOD asset is negligible. The incremental cost of another broadcast channel is low enough that a “pop-up” channel over satellite is eminently economically viable. And costs will inevitably be driven down further as broadcasters take advantage of the cloud’s economies of scale. But the opposite is happening.


Wouldn’t it be great if technology could take on some of the cognitive load? Rather than sifting through irrelevant recommendations presented to me, wouldn’t it be better if there were some sort of artificial precognition that could be applied to content catalogues to find something to watch? The distinction between these two approaches is important.


We don’t need more big companies to hoover up more data about us so that they can better market their content to us, and sell our digital soul to everyone else. We need to take control of our choice, and we need tools to help us do so. I’m looking forward to the day when I can apply my emulated thought process, on technology that only I can access, to pre-sift the rising tide of dross for me, so that I can spend my time watching the gems.

Tuesday 1 December 2020

Remembering Quibi

The brief rise and precipitous fall of Quibi perhaps shows that the world doesn’t need yet another big streaming service. What can we learn? Do you agree? [blog.mindrocketnow.com]


As I write this, the polished https://quibi.com/news site still presents a list of achievements: a new series, reflected glory from the Emmys. The news ends in September 2020, and omits the last big piece of the story: that it has closed after 6 months and $1.75B. Most startups fail, especially those in crowded markets, but this one failed more thoroughly than most.


In a crowded market, you have to differentiate, and that differentiation should address a gap in the market. Digital media services have been trying, largely without success, to combine the way people want to consume content with the way people want to communicate - streaming and social media. So it made sense that this is the gap Quibi should focus on: being a social streaming service. Quibi focused on short form and mobile because that’s how people live their social media lives.


Analysts were impressed with the “mission to entertain, inform and inspire with fresh content from today’s top talent—one quick bite at a time”, as were investors. And a lot of investment was needed. Quibi wanted to differentiate itself from user-generated content by having high artistic and production values. There’s not a lot of this premium content available to acquire, so Quibi had to produce the content itself. And premium content is expensive to create. Netflix spent $17.3B in 2020, so if you want to be in the same game, you’ll have to spend billions too.


The current market has shown that people will pay for content. US research shows that people are willing to pay between $10 and $20 per month for streaming services. Backers saw an addressable market the size of YouTube’s 2B monthly active users. It would only take a percent or so of that market on a regular subscription to pay back that investment manyfold. YouTube itself has 20M subscribers for its Premium service; at Quibi’s $4.99 per month, that makes a healthy revenue stream. The business case seems straightforward.
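A rough sketch of that business case, using only the figures above (and ignoring app-store cuts, churn and free trials), looks like this:

# Quibi's headline business case, using the figures quoted above.
# Simplifications (mine): no app-store cut, no churn, every subscriber pays every month.

monthly_price = 4.99
subscribers = 20_000_000        # about 1% of YouTube's 2B MAU, and YouTube Premium's own base
investment = 1_750_000_000      # the $1.75B Quibi raised and spent

annual_revenue = monthly_price * subscribers * 12
print(f"Annual revenue: ${annual_revenue / 1e9:.2f}B")
print(f"Years to pay back the investment: {investment / annual_revenue:.1f}")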


Using this huge investment, they went all in on premium content. Quibi covered the production costs, unlike cinema, where producers take the financial risk and then recoup it from box office receipts. This enabled content creators to take risks, and they seemed to respond with ideas that were really interesting, using the 10-minute cap on duration to spur creativity.


Was it a casualty of the pandemic? Well, Disney+ and HBO Max managed to launch and sustain, so no. The pandemic didn’t stop it getting mind share of its target audience through buying influencers. The pandemic didn’t stop it from being downloaded from app stores.


I think Quibi got the market wrong. They thought they were “competing against free”. I think they were competing against the pause button. Once you get underneath the gloss and money, I think that they were solving the wrong problem. This is hardly uncommon in startups. Most startups realise this when they start to run out of money, so they pivot away or fail. Quibi had too much money, so they continued. Perhaps the story would have ended differently if they had the same depth of catalogue as Disney or WarnerMedia, but they didn’t have enough money for that. Ironically, they had too much money to fail fast, and too little to succeed.


In a crowded market, you have to differentiate, and that differentiation should address a need in the market. Quibi addressed a gap, not a need, and that is ultimately why it failed.

Friday 27 November 2020

Media Tech Bites: Is DVB now Irrelevant?

Here’s my short take on a piece of media technology. This week I look at a mature, perhaps overripe, technology: DVB. Do you agree? [blog.mindrocketnow.com]


20 years ago, when I first started out in this business, I learnt the DVB standards. This gave me a competitive advantage as a consultant; knowing the detail down to the specification of the tables meant that I could confidently integrate broadcast systems. My last role was deeply technical as well, but the knowledge that was prized was how to use API wrappers in C++ code. Not once did I need to cast my memory back to the DVB standards. For me personally, my hard-won DVB experience is now irrelevant.


DVB (and its family of standards) is still critical to broadcasters. It has fulfilled its promise of lower cost through interoperability. It’s been an objective success, and reaches millions of people daily. It’s not DVB that’s the problem, but broadcast.


It’s becoming increasingly obvious that the future of broadcast is delivery over the internet. Yes, satellite has a much larger footprint; yes, terrestrial has fine-grained reach; but when I read the tech press, most of the arguments come from a place of wanting to continue to sweat the assets that we spent so much time and effort building. None of that matters to the people who consume these services. As I’ve often commented, people just want to watch TV without needing to worry about how they’re watching it.


So as broadcast becomes just another service over the internet, the skills needed to push this forward are the same as software delivery in any other industry. Just like all those pylons holding up TV transmitters around the country, my memorisation of DVB standards is now just another sunk cost.


Friday 20 November 2020

Media Tech Bites: Blockchain’s Potential

Here’s my short take on a piece of media technology. This week I look at an immature but maturing technology, blockchain. Do you agree? [blog.mindrocketnow.com]


As I’m sure you know, a blockchain is a way of recording information that is (almost) impossible to cheat. It’s trivial to verify, and no single body (like a bank) has to guarantee the chain, so it’s eminently suited to industries that suffer from multiple intermediaries and opaque authorities, like finance. Or broadcast?


Blockchain is unparalleled in establishing trust across domains of ownership. Broadcast is all about passing assets across domains of ownership. If each media asset were accompanied by an immutable history, we’d know who contributed, and how much, and therefore how royalties should be shared. We could see a technical history of all the corrections and transcoding across the workflow, and maybe change TV settings to compensate.
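As a thought experiment, here’s a minimal sketch of what such an immutable history could look like: an append-only chain of records, each one carrying the hash of the record before it, so that any tampering breaks the chain. This is an illustration of the idea only, not any real media-industry ledger.

import hashlib
import json
import time

def add_record(chain, event):
    """Append an event (contributor, action, etc.), linked to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every hash; an edited record or a broken link fails the check."""
    prev_hash = "0" * 64
    for record in chain:
        body = {key: value for key, value in record.items() if key != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

asset_history = []
add_record(asset_history, {"contributor": "post house", "action": "colour grade"})
add_record(asset_history, {"contributor": "playout", "action": "transcode to HEVC"})
print(verify(asset_history))  # True - and False if any earlier record is altered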


But it strikes me that blockchain isn’t the answer, because broadcast has already optimised the secure distribution of content. I don’t mean that content security is perfect, because it’s far from it. It’s optimised because the leakage of revenue isn’t significant enough to merit ripping out and replacing all the technology, legal and consumer investments that we’ve made.


So for broadcast, and for the time being, blockchain is still a solution looking for a problem.


Thursday 12 November 2020

PyBloom coding project: Conclusions

Conclusions

This project has been quite a learning experience. I’ve had to get to grips with a lot of technologies, a lot of frameworks, and quite a steep learning curve. Here are some top tips:


  • No matter how clear the tutorials, the code never works first time. Learn by testing.

  • Google and YouTube are great resources for finding out how to do things, but they rely on being able to ask the right question. Be clear on precisely what the problem is.

  • Having to describe everything here has really helped reinforce the learnings. 

  • There’s always another feature to think up and implement. But it’s important to be clear when done=done, and the program is good enough to be used.

  • Good enough to be used by you isn’t the same as good enough for someone else. If you want to roll out the program, have it tested by someone that doesn’t know it.

Even better if…

Over the course of this document, I’ve described how I’ve implemented the features of my program. It’s doing what I intended it to do: the lights are showing how the evenings are getting colder. It’s fine for personal consumption, but it wouldn’t be fine for others to use. Before going on to new features, I should look at operationalising the code, which will be a whole different set of challenges:


  • The program should be available (to an acceptable service level), which means it should be deployed onto a hosted site. 

  • I’ve done precious little formal testing. At a minimum, operationalised code should have a testing approach consisting of test data, test cases and expected results (see the sketch after this list). At best, these tests should run automatically as the code is promoted from dev to prod. This is the basis of continuous deployment.

  • Operational tools: manual CRUD access to the databases - because I built in a way of manually adding data, but didn’t build a way to remove it.
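
Here’s a minimal sketch of what that testing approach might look like with pytest. The module, function name and expected colours are hypothetical placeholders, not the real PyBloom code.

# Sketch of a "test data, test cases, expected results" approach using pytest.
# The import below and the expected hex colours are hypothetical placeholders.
import pytest
from pybloom.colours import temp_to_colour  # hypothetical module and function

@pytest.mark.parametrize("temperature, expected_hex", [
    (-5, "0000ff"),   # cold evening -> blue (illustrative expected result)
    (12, "00ff00"),   # mild evening -> green
    (28, "ff0000"),   # warm evening -> red
])
def test_temp_to_colour(temperature, expected_hex):
    # Each case pairs known test data with an expected result
    assert temp_to_colour(temperature) == expected_hex

Running pytest locally is the manual version; hooking the same command into the dev-to-prod promotion is what turns it into continuous deployment.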


--

Thank you for joining me on this journey. I hope it’s been of some help with your own exploration. Also visit https://github.com/Schmoiger/pybloom for the full story.

Wednesday 11 November 2020

The Problem with Data Driven Decisions

This week I look at the pitfalls of being data-driven. It seems simple, and logical, so why do we end up falling back on our gut feel? [blog.mindrocketnow.com]


Should I make my next investment decision based on objective data, or by taking a punt based on how I feel? It seems a straightforward answer, doesn’t it - who would want to be responsible for a significant financial decision that was made on a whim? (Kinda depends if it worked out…) But making a good data-driven decision is a lot harder than just intending to do so.


What are you measuring? This simple question unravels a chain of questions that each turn out to require careful thought. Let’s pretend we’re trying to figure out whether to spend money on a cool new app feature:

  • What are the business (or strategic) outcomes you’re trying to influence?

  • How does the app contribute to that outcome?

  • How does the feature improve the ability of the app to contribute to that outcome?

  • What metric can we put on that improvement?

  • How can we measure those metrics? 

  • Are those measurements allowed in our privacy guidelines?

  • What would be the impact on those quantities if we didn’t build the feature?

  • Is there anything else we could do, at less cost, that would improve that metric, even in part?

  • Is there anything else we could do, at the same cost, that would improve an alternative, more important metric?


Most of the time, we don’t have a good answer to all of those questions, often because answering them thoroughly would take a lot of effort - more than the effort to ship the feature itself. So we end up making a best guess - aka taking a punt.


I’m really attracted to the concept of lean development, which codifies learning by doing. The idea is to ship features as frequently as possible, measure their impact upon metrics, and improve or discard depending on whether the impact is positive or negative. By keeping this feedback loop really short, we risk less wasted development effort. As a side-effect, we maintain focus on the metrics that matter to us, and maintain a cadence of shipping features.


As before, it depends upon the metric. If we optimise for a vanity metric (one that doesn’t align with a business objective) or a proxy metric (one that doesn’t align well with a business objective), then we might miss the business objective. We might optimise for a very fast playback start but miss the fact that consumers are much more worried about not being able to find programmes that they like.


After all that, you might be thinking that I advise avoiding making gut feel decisions. You’d be right - gut feel is basically your thought heuristics kicking in, reinforcing all your unconscious biases. Taking a moment to consider rather than react is a good rule. But that doesn’t mean you shouldn’t use your feelings. I’m also a strong believer in eating your own cooking. 


We should be building apps for people, and we need to understand people in detail if the app is going to make a difference to them. The person you get to see the most is yourself, so you should be able to analyse your own reactions to your app in the best detail.


Be data driven but guided by context. The best context is you.


Tuesday 10 November 2020

Media Tech Bites: Importance of Metadata

Here’s my short take on a piece of media technology. This week I look at why metadata is the most important part of media tech. Do you agree? [blog.mindrocketnow.com]


Broadcast is in the middle of a technology revolution. 20 years ago, I was cabling routers to encoders, making sure that the labels were correct. Now I’m working with engineers to ensure the virtual routing through our AWS services is working as it should, and feeding iOS and Android apps properly.


Not everything is changing. Then and now, we use metadata to describe the services being delivered. In the world of DVB back then, the metadata was mainly the Service Information tables that told set-top boxes where to find the broadcast services in the frequency spectrum. Now, as then, the metadata drives content search and discovery. However now, unlike then, metadata drives a far richer discovery experience; we can see trailers where we used to see thumbnails, we can link to IMDb articles where before we had a character-limited synopsis, and we can select the next programme based on what we’ve just watched or even what mood we’re in.


More fundamentally, metadata is starting to drive how the services are created. The cabling and labels are abstracted away from the creation of services by software drivers. Middleware orchestrates those drivers into services. Those services can be composed together in very visual, “low-code” ways. They can be defined in terms of metadata alone. This means UI and UX designers can put together interesting app experiences without needing to know Kotlin or Swift. It also means that experiences can be put together programmatically, such as a UI that reacts to the content that you’re watching, or the context that you’re watching it in.
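To illustrate what “defined in terms of metadata alone” might mean, here’s a toy sketch: a home screen described purely as data, which a generic renderer on any platform could turn into rails of content. The field names are invented for the example.

# A toy example of a screen defined purely as metadata (field names invented).
home_screen = {
    "screen": "home",
    "rails": [
        {"title": "Continue watching", "source": "history", "layout": "landscape"},
        {"title": "Because you watched Tenet", "source": "recommendations", "layout": "portrait"},
        {"title": "New this week", "source": "editorial:new", "layout": "hero"},
    ],
}

def render(screen):
    """A stand-in for the native app: it only interprets metadata, with no hard-coded UI."""
    for rail in screen["rails"]:
        print(f"{rail['title']} ({rail['layout']} rail, fed by {rail['source']})")

render(home_screen)

Change the metadata and the experience changes, without touching Kotlin or Swift.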


If this all seems familiar, it’s because the enterprise IT sector went through this change last decade. Its experience is that metadata drives the creation of the service, the consumption of the service, and the quality assurance of the service. To stay relevant, it’ll be necessary for all of us to be metadata-literate.


Monday 9 November 2020

Pybloom coding project part 13: Implementing version control

 

Implementing version control

Once you start, you can’t stop tinkering. At some point, something is going to break. So, as I intend to keep this code going for a while, I implemented version control with Git. This is a feature-rich version control system that takes care of the steps in promoting the code you’re tinkering with to code that’s ready to publish. It also integrates nicely with the GitHub service on the web, where you can share your code, and lets you push to and pull from remote repositories such as the one on my Raspberry Pi. No more cutting and pasting, and no more associated typos.


Once completed, the setup will look like this:


Atom [UI for] -> Git on Mac <-> GitHub [also for public sharing] -> Git on RPi

Spikes

Figuring this out took a lot of searching, as I didn’t find the documentation particularly enlightening. Here’s recommended reading.


Setting up Git, GitHub and Atom

Git was already installed on my Mac, and is part of Atom by default, so there’s no need to install anything more. But there is a lot of other setting up to be done.


Before making the first Git push, I set up the files to ignore by adding the following items to my .gitignore file. These are: environment files that are set up by the system; data files that are created at run time; and personal information that I don’t want you to see! Note the **/ syntax, which ignores the file in all subfolders.


  • __pycache__* : used by Python runtime 

  • Icon*, **/*.DS_Store : hidden files used by macOS

  • .git : hidden files used by Git

  • *_old : a useful way of hiding files you might still be tinkering with

  • credentials.py : secrets needed for the API

  • **/*_bar.svg, **/*_pie.svg : these graphs are made from the data at run time

  • database.sqlite3 : created by the main code on first connection and updated at run time


Next up is to create the target repository on GitHub. This is done by logging into the GitHub dashboard, and creating a new repository from there. Mine is called https://github.com/Schmoiger/pybloom and I encourage you to go have a look there.


Git (the local repository) needs to know this GitHub (remote repository) URL, so the next step is to add it to the Git config. This is done from Terminal, using the following command:


git remote add origin https://github.com/<your username>/<your app name>


This command associates the name origin with the remote repository URL, which makes management within Git and Atom a little easier. (If you’re so inclined, you could instead clone my repo and work on my code. We’ll do this in the next section on setting up the Raspberry Pi.) Now that we’ve told Git where your remote repository is, we’ve got to tell it who you are, so that Git can tell GitHub. In other words, we have to set up your email address in Git so that GitHub accepts you as an authenticated user.


git config --global user.email "email@example.com"


The email address is the one set up in GitHub. It doesn’t have to be a real address; GitHub will set up a “noreply” email for you if you wish.


We don’t configure the password in the same way. Instead, if you now boot Atom you should see a login window in the GitHub pane. This asks for the login token, which you’ll need to get from https://github.atom.io/login, then paste into Atom.

Setting up the Raspberry Pi

Git comes pre-installed in Raspberry Pi OS, so no further installation is necessary. But, as with the Mac, there is configuration to be done.


git config --global user.name "<your username>"

git config --global user.email "<your email>"


The first step is to tell the Git instance on the RPi who you are. By now everyone has hundreds of username/email combinations; rather than creating another one, I’m re-using my GitHub identity.


git clone https://github.com/Schmoiger/pybloom.git Projects/pybloom


If you’re cloning my repo into your Projects folder, use the statement above, and a copy of the repo will be cloned into a pybloom subfolder. Otherwise, change the URL and the destination to suit your own environment.


The final step is to copy across the credentials.py file into the pybloom folder, as it has all your secrets for the API login.

Documentation

Arguably, the most important part of the repository is the documentation. I’ve created three pieces:


  • README.md - bite-sized summary of the key things you need to know to use the program

  • Blog post - this set of posts (not in GitHub, but on https://blog.mindrocketnow.com)

  • PyBloom_manual.html - all the posts put together into a single document, for convenience

Putting it together

.gitignore

__pycache__*

Icon*

.git

**/*.DS_Store

**/*_bar.svg

**/*_pie.svg

*_old

credentials.py

database.sqlite3


Workflow for making changes

  1. Make changes in dev environment, using Atom

  2. Commit in Git (after testing), then push to GitHub, from within Atom

  3. SSH into the RPi, type workon pybloom to work in the virtual environment

  4. Pull the code changes by typing git pull

  5. Restart the web server by pressing Ctrl+C, then typing flask run --host=0.0.0.0

Friday 6 November 2020

PyBloom coding project part 12: A little JavaScript magic

Here's an extra little something for PyBloom. In part 12 I look at how to transfer data from the Python code into the CSS of the website - not as hard as it first seems.

Colour picker utility

My stretch target was to display a table of all the colours of the Bloom lamp on a page of the website. As we've seen, the colours are in a persistent SQLite table, so all I have to do is extract them from the table and somehow get the CSS to read the colour information. Except CSS is not an interactive language, so I needed another technology. Step forward JavaScript.

The technologies

  • HTML

  • JavaScript

  • Jinja2

The code

{% extends "base.html" %}


We put all the main styling into the base page, which means, as before, we simply need to extend it here.


{% block app_content %}


This HTML section will be inserted into the base.html template at the position marked app_content.


<table class="table">


We’re making use of the Bootstrap formatting for tables to make things pretty. 


<tr>

  <th scope="col">Temperature</th>

  <th scope="col">Colour</th>

</tr>


The table consists of two columns: one for Temperature and one for the corresponding Bloom colour.


{% for row in rows %}


This is our familiar Jinja2 loop, which we want to repeat for every row in the temperature conversion lookup table.
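
For context, the rows variable comes from the Flask view that renders this template. That view isn’t shown in this post, but a sketch of what it could look like (the route, table and template names are my assumptions) is:

# Sketch of the Flask view behind this template (route, table and template names assumed).
import sqlite3
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/colours")
def colours():
    # Read every row of the temperature -> colour lookup table ...
    connection = sqlite3.connect("database.sqlite3")
    rows = connection.execute("SELECT * FROM colour_lookup").fetchall()
    connection.close()
    # ... and hand the rows to Jinja2, which loops over them below
    return render_template("colours.html", rows=rows)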


<tr>

  <td>{{row[1]}}</td>


The first cell in the row is simply the value of temperature from the lookup table.


  <td id="{{row[2]}}">{{row[2]}}</td>

</tr>


The second cell is the value of the colour, a hex string. This same string is also used to identify the cell. Each cell will have its own background colour, so it needs to be uniquely identified. We’ll implement this logic with a small bit of JavaScript, so let’s jump straight into it.


{% block app_js %}


As with the inserted HTML, this identifies where the custom JavaScript goes. Order is important when placing JavaScript; because our code may rely on (or override) the Bootstrap JavaScript, the block comes after the statement that pulls Bootstrap from the CDN.


<script>

  "use strict";


This is the normal preamble for JavaScript. Unlike CSS, as of HTML5 there’s no need to declare type="text/javascript", since JavaScript is the default. The "use strict" directive puts the browser into strict mode (defined in ECMAScript 5, from 2009). This is important because some sloppy constructs that non-strict code would let through will now throw errors, and some commands behave differently.


{% for row in rows %}

  document.getElementById("{{row[2]}}").style.background = "#"+"{{row[2]}}";


We can use the same Jinja2 loop structure to set the background for each hex-value cell in our table. But because each iteration of the loop emits another persistent line of JavaScript, we need to be careful not to declare variables, as each iteration would simply re-declare and overwrite the previous one. That’s why we have this complicated command that chains a lot of calls together. Let’s pick it apart.

  • document.getElementById(): We assigned a unique ID for each table cell containing a hex value

  • "{{row[2]}}": This unique ID is the hex value string

  • style.background: JavaScript accesses the cell colour using this property (which is named slightly differently to the CSS attribute background-color)

  • = "#"+"{{row[2]}}": The colour is the hex value from the lookup table, which is a string (as accepted by the SQLite database), but has to be prefixed with a hash to identify it as a hex string

  • ; : don’t forget to finish every JavaScript statement with a semicolon - unlike Python, where the end of the line ends the statement


The table now pulls the Hue Bloom colour data from the external SQLite table and displays it on a web page.

Putting it together

{% extends "base.html" %}



{% block app_content %}

  <h1>Colour key</h1>

  <div>

    <table class="table">

      <tr>

        <th scope="col">Temperature</th>

        <th scope="col">Colour</th>

      </tr>

      {% for row in rows %}

      <tr>

        <td>{{row[1]}}</td>

        <td id="{{row[2]}}">{{row[2]}}</td>

      </tr>

      {% endfor %}

    </table>

  </div>

{% endblock %}



{% block app_js %}

  <script>

    "use strict";

    {% for row in rows %}

      document.getElementById("{{row[2]}}").style.background = "#"+"{{row[2]}}";

    {% endfor %}

  </script>

{% endblock %}



This colour picker page combines a lot of technologies to achieve something that seemed quite simple, but turned out to require a bit of thought. And that's the theme for this entire project! In the next section, I'll look back at the project and draw out some conclusions. Also visit https://github.com/Schmoiger/pybloom for the full story.