June 14, 2017
Networked Insights
Photography by Jay Hagstrom. Interviews have been edited for length and clarity.


We recently met with engineers at Chicago’s Networked Insights, who told us how they turn enormous quantities of data into actionable insights for clients. The conversation starts with the company's taxonomy for classifying the social media universe and ends with its engineering team's commitment to learning from failures.

NETWORKED INSIGHTS AT A GLANCE

FOUNDED: 2006

WHAT THEY DO: Networked Insights uses artificial intelligence to analyze more than half a billion social media posts daily. They use that data to help brands write more relevant content, reach new audiences and understand what customers are saying about them.

WHERE THEY DO IT: Chicago

CONFERENCE ROOM THEMES: Tropical destinations. (See you in Costa Rica!)

TECHNOLOGIES: Apache Storm, Kafka, Spark, Elasticsearch, Postgres, Java (JVM) and React.

IDEAL CANDIDATE: A well-rounded person who’s willing to try something new, even if it might lead to spectacular mistakes.

GAME OF CHOICE: Shuffleboard.

11 YEARS OLD: On the day we visited.


What does Networked Insights do?

Brad Lehner, director of engineering: We make software that helps marketing professionals understand the online conversations around their products, services, companies and industries, as well as who the people participating in those conversations are. Those insights in turn help them develop content and target people.

Networked Insights used to be primarily a services company with proprietary technology. Now you are a company that makes software for marketers. How did you simplify your product for a broader audience?

Brad Burke, chief technology officer: In the beginning, our Kairos AI platform was sort of just a blank slate and you had to know how to ask it questions or use our analysts, which took a lot of time. Over time, through working with customers and through our own research, we came up with a taxonomy that classified the social universe. That taxonomy really builds the conversation. Otherwise, what we have is just an enormous lake of data. 

People love talking about data lakes these days, but you only want a data lake until you get one. Then you realize you’re drowning. 

What's the biggest challenge in turning oceans of data into something that's usable?

Burke: Every feed of data is different, so you have to understand each one's behavior. Then, to model our concepts, we use all sorts of different types of technologies, from trained models to linguistic technologies or keywords. You also have to maintain a high level of accuracy while at the same time keeping up with huge volumes of data.

Sebastian Seith, software engineer: We get a lot of different data types, but we also receive data in different ways. Some data sources can go offline for a while and come back with a giant batch of data, for instance, so you have to be able to catch up with that without letting the other content get backed up behind it. 
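One common way to handle the problem Seith describes is to interleave work across sources rather than draining one source at a time. The sketch below is purely illustrative (not Networked Insights' actual code): it round-robins a fixed processing budget across per-source queues, so a source that returns with a giant backlog can't starve the feeds that are still live. The function name and queue structure are assumptions for the example.

```python
from collections import deque

def drain_round_robin(source_queues, budget):
    """Process up to `budget` items total, taking one item per source
    per pass, so a backlogged source can't starve the live feeds."""
    queues = {name: deque(items) for name, items in source_queues.items()}
    processed = []
    while budget > 0 and any(queues.values()):
        for name, q in queues.items():
            if q and budget > 0:
                processed.append((name, q.popleft()))
                budget -= 1
    return processed
```

With a source holding a 100-item backlog and a live source holding three fresh posts, a budget of 10 still gets all three live posts through early instead of queuing them behind the backlog.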


How do you build a tech platform that can handle that variability?

Burke: We’ve thought a lot about how we can get as much of the data as possible into a standardized format right at the very beginning without losing any of the information. Then you have to find the technologies that can handle absolutely enormous data volumes at very high processing rates.
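The standardize-without-losing-information approach Burke describes is often implemented by mapping each feed's fields into a common record while carrying the raw payload along. A minimal sketch, with entirely hypothetical source names and field names (not Networked Insights' actual schema):

```python
def normalize(source, payload):
    """Map one raw post from a named source into a common record,
    keeping the raw payload so no information is lost downstream."""
    if source == "twitter":
        record = {
            "text": payload["full_text"],
            "author": payload["user"]["screen_name"],
            "posted_at": payload["created_at"],
        }
    elif source == "blog":
        record = {
            "text": payload["body"],
            "author": payload.get("author", "unknown"),
            "posted_at": payload["published"],
        }
    else:
        raise ValueError(f"unknown source: {source}")
    record["source"] = source
    record["raw"] = payload  # retained for later enrichment or reprocessing
    return record
```

Because the raw payload rides along with the normalized fields, later enrichment or reclassification steps can go back to the original data without re-ingesting the feed.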

I think the biggest innovation for us has been using the cloud properly in a hybrid format. We can process things very cheaply in our own data center, but we have a hybrid setup with a cloud partner for when we need extra horsepower.

The question of cost is kind of unique to the industry you’re in, since you depend on so much data processing. How does the cost of running your servers factor into how your engineering team works?

Burke: DevOps used to be the only people who cared about the budget, but now it’s really become a joint effort. It's interesting to see engineers talk about that stuff, because you don't usually see them that engaged in the conversation. 

Seith: We've learned quite a bit about the economics of what's really feasible for doing things at scale. In some cases, the cloud works great and in other cases it doesn't. And I don't think that's necessarily very apparent based on current industry standards and trends. Everyone's advocating for the adoption of cloud technologies, but the economics aren't necessarily there for everything.


Right. And there’s also the question of which analytics are worth doing, and which analytics are going to end up costing you more than you can earn from the insights.

Burke: You really do have to get into product profitability, because there are two dynamics going on: the question of what to do in the cloud and the question of return on investment. Just because you can do something with data, that doesn't necessarily mean the answer is worth anything. And if you’re doing your analytics in the cloud, you're paying for everything.

What kind of technologies is your platform built on?

Lehner: We have a pretty extensive stack because our platform is really composed of two separate core systems. One system covers how we ingest, process and classify data, as well as how we put it into storage systems that allow us to do further data enhancements or transformations or expose it for search.

Our stack for that includes things like Apache Storm, Kafka, Spark and Elasticsearch for real-time search. We started with a lot of the open source big data solutions early, found the ones that worked for us and optimized them for our needs.

Then there’s a set of client-facing applications and back-end services built on more traditional JVM-based technologies like Java, relational data stores like Postgres and front-end technologies like React.

If you had to name one defining feature of your engineering team’s culture, what would it be?

Seith: We have a very flat structure. Everyone's encouraged to bring new ideas to the table.

Burke: For instance, every week, we host “Shark Tank,” where anyone can come in with an idea. Our newest product, Audience AI, was one of those ideas. We always carve out some time to assign currently available engineers to these kinds of projects. With Audience AI, we went from prototype to customers in about two and a half months.

Seith: We're also not worried about appearances. It's all about whether you can do what needs to be done, not whether you're working from home or what times you’re working. A lot of companies say stuff like that, but I don't think they truly understand it.


So you won’t tell people they can work whenever, then ask where they were at 9 a.m. when you walked by their desk?

Burke: I started my career as a consultant, which was very much about how many hours you’re charging for and whether you’re visible until 9 p.m. every night. But it’s really important for me to have well-rounded employees, and we want people who go home and have dinner with their kids at night. I get way more productivity out of happy people who have full lives.

What do you look for in a candidate, aside from their technical skills?

Burke: I always ask engineers about their failures. Being an entrepreneur means you're willing to take risks. I'm not looking for cowboys, but I'm looking for people who are willing to try something new and who are okay with accepting failure and learning from it. I also look for people who are collaborative.

Have you made any spectacular mistakes while here?

Seith: I built out a system that the analytics team requested so they could go back and get historical data for new queries. In theory, this was going to be a big improvement over our old system, which had a lot of potential for crashing everything.

When we first turned it on, we hadn’t put all of our safety checks in place. So within about 10 minutes, somebody ran a query that was way too large, and that spanned way too much data, which put our servers under a lot of stress. Because the program became unresponsive, the user proceeded to try the query again about five times. We spent the better part of that day and a good chunk of the weekend getting everything back online. And I think a couple of the machines in the cluster threw a disk because of the crash.
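The kind of safety checks Seith mentions often amount to two simple guards: a pre-flight size estimate that rejects oversized queries, and a dedupe that stops an unresponsive query from being resubmitted. A minimal sketch of that idea (the limits, function names and query-ID scheme are all hypothetical, not the team's actual implementation):

```python
in_flight = set()  # IDs of queries currently running

def check_query(days_spanned, estimated_docs, max_days=30, max_docs=5_000_000):
    """Pre-flight safety check: reject a historical query before it runs
    if it spans too much time or would scan too many documents."""
    if days_spanned > max_days:
        raise ValueError(f"query spans {days_spanned} days; limit is {max_days}")
    if estimated_docs > max_docs:
        raise ValueError(f"query would scan ~{estimated_docs} docs; limit is {max_docs}")
    return True

def submit(query_id, days_spanned, estimated_docs):
    """Refuse to resubmit a query that is already running, so an impatient
    user retrying five times can't multiply the load."""
    if query_id in in_flight:
        raise RuntimeError(f"query {query_id} already running; not resubmitting")
    check_query(days_spanned, estimated_docs)
    in_flight.add(query_id)
```

Either guard alone would likely have contained the incident: the first blocks the oversized query outright, and the second caps the damage of repeated retries.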

What was it about Networked Insights that drew you in?

Seith: I was looking for something with a fast-moving and innovative culture, which is what I had at the beginning of my career. This was the smallest company I'd ever really considered, and they were doing amazing things. We're working on machine learning and artificial intelligence solutions that I hadn't really seen people doing at a high level. We focus on exploring these things to their fullest and not letting preconceptions about how something will turn out get in the way of exploring.