Data Science at Slush 2016

Slush, Europe’s leading startup event, took place in Helsinki from November 30th to December 1st. Thousands of attendees including startups, investors, tech companies, and researchers came together to get a glimpse of the latest developments in a massive range of fields.

Futurice was there in force: our Futucafe went on tour, providing good coffee and a meeting space for attendees, and we launched the Chilicorn Fund as a way to make the world a slightly better place. Our data science team went along too, and I’m going to talk about some of the trends we saw there.

Attendees wait in line at the Futucafe.
Good coffee is important to make it through Slush.

Machine learning, artificial intelligence, analytics, data science: these terms were on the lips of companies and speakers throughout Slush. Industry giants and fledgling startups alike were discussing the potential of data-driven methods to revolutionise all sorts of domains.

Opening Up AI

IBM had a large presence at the conference, mostly centred on their Watson APIs and Bluemix cloud services - these provide machine learning services that developers can integrate into their applications.

This idea of machine learning as a service was also reflected right down to the startup level, with Finnish company Valohai aiming to be to machine learning what GitHub is to Git: a provider-independent set of tools for rapidly deploying machine learning.

This increase in the availability of machine learning tools is part of a wider trend: the democratization of AI. Both Google and Microsoft gave talks on this subject at Slush. There is a growing set of open tools for knowledge sharing in AI research. arXiv sees new research papers on deep learning and other AI topics released daily, while GitHub allows researchers and companies to share their code freely. Twitter hosts public discussions among the community and gives visibility to interesting work. As one speaker put it, "ideas are having sex with each other as never before."

Microsoft presentation on democratizing AI.
Microsoft see a bright future for AI

Despite this, there is a big area where openness and transparency can’t be found: the data. Google and other companies are open-sourcing their machine learning toolkits, but without comparable access to data it is impossible for others to train models with the same accuracy. This makes it hard for startups and SMEs to compete with the big data science players - Google, Facebook, Amazon and so on - at a general level. Rather, they have to find a narrow problem and solve it well to stand out.


There were a few application areas where data science methods came up over and over. Here’s a look at some of the most interesting.

Chatbots and Personal Assistants

Understanding human language is one of the oldest and most important challenges in artificial intelligence research. Solving this challenge could be the difference between a future of intelligent systems that we can interact with in familiar ways, and one of faceless algorithms controlling the world around us. This year, there were plenty of companies at Slush working on this problem.

Chatbots were a major data science buzzword in 2016. Amazon have Alexa, Apple have Siri, Microsoft have Cortana, and Google have Assistant. IBM now have the Watson Conversation API to let people develop their own chatbots for specific contexts.

Smaller companies are developing chatbots too. Finnish startup Jenny aims to improve customer support by building a bot that learns from conversation logs how to respond to common queries. Search company Algolia talked about how many chatbot use cases are just conversational skins on search problems.
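The "search problem" framing can be made concrete with a toy sketch: a hypothetical support bot that answers a new question by retrieving the most similar logged query. All of the conversation data and function names below are invented for illustration; real systems like those mentioned above use far richer language models.

```python
from collections import Counter
from math import sqrt

# Toy log of past support conversations (invented data).
LOG = [
    ("how do I reset my password",
     "Use the 'Forgot password' link on the login page."),
    ("where can I download my invoice",
     "Invoices are under Account > Billing."),
    ("how do I cancel my subscription",
     "Go to Account > Subscription and choose Cancel."),
]

def bag_of_words(text):
    """Lowercased word-count vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(question):
    """Return the canned answer whose logged question is most similar."""
    q = bag_of_words(question)
    best = max(LOG, key=lambda pair: cosine(q, bag_of_words(pair[0])))
    return best[1]

print(answer("password reset help"))
```

Even this crude bag-of-words matcher captures the core loop: embed the user's message, search past conversations, and reply with the closest match. The hard part, and the value a company adds, is in the representation and ranking.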

Chatbots aren’t the only way to leverage language understanding. Iris is building an AI that reads scientific papers, extracts important concepts, and maps out related work. Teqmine is using machine learning to index patents, so that people can find similar work before investing in a new idea or product. Such intelligent assistants have huge potential to transform the way we work and increase productivity.

Health

Health is another area where machine learning will have a big future impact. One presenter noted that there are over 8000 medical research papers published every day! No human doctor can keep up with that pace, so we need systems that can analyse and aggregate this huge knowledge base to help doctors decide on the best courses of action. To that end, IBM is partnering with Tekes to create a center of excellence for healthcare in Helsinki, where their Watson machine learning tools will be used to tackle big data problems in health.

Wearables and quantified-self companies were also common. Bioasthma is a wearable for monitoring asthma patients and automatically detecting attacks. MedicSen provide a tracking app for diabetic patients and give automatic treatment recommendations personalised to the patient. Both services can send data to doctors automatically, making monitoring and treatment more continuous. One of the finalists in the Skolar science pitching competition was also working on health wearables from the academic side.

Sridhar Iyengar of Elemental Machines gave an interesting talk on the IoT for scientific labs. He gave a number of examples where ubiquitous sensors and machine learning can improve the success rates of experiments by detecting or preventing human error. For example, Vium is a startup that aims to automate drug trials on mice. Continuous monitoring and automatic analysis are already improving the efficiency and success rate of such experiments.

Fintech

Financial transactions are a huge source of data, and many low-level transactions are already priced and carried out by algorithms. At Slush, we saw several companies working with financial data. Nordigen tries to automatically categorise transactions to enable more accurate credit decisions. Optiacs aims to forecast stock growth and help customers manage their investments. With blockchain and other financial technologies on the rise, I expect this will be a growth area in data science going forward.
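To illustrate the kind of task a transaction-categorisation service tackles, here is a deliberately simplified keyword-based sketch. The categories, keywords, and merchant names are all invented; a production system would learn these mappings from labelled transaction data rather than hand-written rules.

```python
# Invented keyword -> category rules; real systems learn such mappings
# from labelled transaction data instead of hard-coding them.
RULES = {
    "grocery": ["supermarket", "grocer", "market"],
    "transport": ["taxi", "rail", "bus", "fuel"],
    "rent": ["landlord", "housing"],
}

def categorise(description):
    """Assign the first category whose keyword appears in the description."""
    text = description.lower()
    for category, keywords in RULES.items():
        if any(word in text for word in keywords):
            return category
    return "other"

for desc in ["CITY SUPERMARKET 24/7", "Night Taxi Oy", "Unknown Payee"]:
    print(desc, "->", categorise(desc))
```

The interesting engineering lies beyond this sketch: handling misspelt and abbreviated merchant strings, multiple currencies, and the long tail of payees that no keyword list covers.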

The Slush 100

One of the highlights of Slush each year is the Slush 100 pitching contest, where 100 startups compete to win a €500,000 investment. This year, many of the top 20 competitors were using machine learning in their business. For example: Supermetrics is a tool for aggregating web analytics data from many sources and producing reports and visualisations; Valossa is creating an AI that can understand and automatically tag the contents of videos; and overall winner CybelAngel analyses the web and IoT to find leaked data or phishing sites imitating their clients. It was great to see innovative use cases for machine learning, and I expect we'll continue to see more startups follow this trend in the next few years.

Where Next?

I’ve only scratched the surface of the data-driven companies at Slush 2016. Other companies were doing great work in robotics, automatic pricing, consumer analysis, and many more domains. In the future, I expect that the data science theme will continue to grow, both at Slush and in the wider world. The data explosion is only beginning: more and better tools will be needed to make sense of the data we generate in the years to come.

IBM speaker predicts 'data explosion' by 2020
IBM forecast exponential data growth.

To close, I’ll talk about one of the final conference talks, entitled ‘AI in 2016: The Real Deal’. The panelists observed that AI and machine learning have replaced big data as the buzzwords of choice. Many companies claim to be using these techniques but actually only sprinkle them on more traditional systems. Despite this, the panel concluded that the resources are there to do great things, and that some companies are already getting meaningful results. I definitely agree, and hopefully this post has convinced you of the same.

One speaker I saw said “Fact: machine learning is going to be a big part of all businesses in the future.” Are you ready for that future? If not, the Futurice data science team can help. Get in touch!