Daniel Kagan is a senior algorithm developer for Precognize’s SAM GUARD®. He shared some of his experiences working to improve SAM GUARD®’s algorithm and enhance its performance.

What is involved in your role as senior algorithm developer?

As Precognize’s senior algorithm developer, my main responsibility is to resolve problems that arise in the SAM GUARD® algorithm, and find ways to improve it. 

Our algorithm picks up on anomalies in the client’s plant and issues an alert, along with hints to help a process engineer at the client or in our Analytical Monitoring Service (AMS) transform that anomaly into actionable advice. Most clients work with Precognize’s own AMS, but some have their own internal AMS team. The AMS team – ours or the customer’s – will let me know if there’s an issue related to the algorithm, and then I’ll investigate it and fix it.

I collaborate closely with the Precognize AMS team to come up with ways to improve the algorithm. This helps things run more smoothly, makes the product better and lets us deliver better insights. 

What does a typical day look like for you?

At any given time, I’m working on both short-term projects, which usually means solving something that the AMS team raised, and major long-term research projects. So on a typical day, I’ll come into work and check on the status of all my projects, and see if there are any new issues that I need to look into. There might be a strange event that could indicate a problem with the algorithm, or a report from the AMS. If such issues are present, most of the day will probably be focused on them. If not, I will spend the day advancing my long-term projects.

What is your professional background?

My background is in academia. I completed a PhD in astrophysics in 2013, and then I took various post-doc positions in Israel until I decided to leave academia. I went to a technical training boot camp called the Israel Tech Challenge in 2020 and learned how to code properly. I already knew basic coding, but that’s where I learned to code in a more structured way so that other people can read and understand the code I wrote. 

After the boot camp, in 2021, I joined Precognize, so I’ve been working here for about two years.

What made you decide to work at Precognize?

Academia teaches you how to research, and when I interviewed at Precognize, it was clear that research is a priority here. Whether it’s how to improve things on a broader level, or trying to understand what’s going on right now with a specific issue, it all involves research. So this was a great way for me to combine my research background with something that is improving how manufacturers work every day.

What do you like best about working at Precognize?

The lovely culture in the company. Everyone is helpful and everyone works together, so I know that if I have a problem, I can go ask the dev team for help. Like I said, I’ve been working closely with the AMS people and that went really well. There’s a collaborative atmosphere in the company which is great to work in.

What do you like best about your work?

I love the variety. I’m always involved in things on many different levels. I usually have a big long-term research project that’s ongoing; something I need to explain right now, in the next hour; and a short-term project where I need to help someone understand what just happened in the algorithm.

In my work, I interface with many parts of the company. At the beginning of a research project, I will often work with the AMS team to identify which improvements to the algorithm would have the most impact for our customers. Once I complete a new algorithm, I work with the development team to integrate it into our existing codebase and to ensure that the algorithm gets the data it needs from our database. Finally, I work with the product team and front-end developers to ensure that the customer-facing application properly leverages the algorithm.

In academia, I didn’t have defined objectives, and I didn’t receive as much feedback about my progress. I also had no idea how much importance my project might have for anyone outside of my team. At Precognize, I get a lot of feedback. I know what’s important, what has to be solved, what kind of impact my work will have. I feel that I’m making progress, and that’s much better for me than academic research. 

I also very much like that we’re making things more efficient for manufacturing plants, so they’re less polluting. It’s good to feel you’re doing something good for the world.

What is most challenging about your work?

Learning how to present research on my own was a challenge at the beginning. It was also difficult to adjust to working as part of an organization, where people might tell me not to work on something, or that we need to do something different for business reasons. In academia, no one can tell you to stop working on something or to switch to another topic.

I also had to learn how to communicate well with decision makers. Sometimes they might have unrealistic expectations for an algorithm. So I need to explain clearly what data can and can’t do, because the algorithm can’t deliver without the right data or the right quality data. But I also had to learn the business case better, because sometimes I didn’t understand exactly what they were asking from me. 

Can you tell us about a particularly important or interesting problem that you overcame?

Yes, we just finished making a big improvement to our hint navigation, and it should be released in the fall update. The hint navigation is part of what turns an alert from the algorithm that something went wrong in a certain location, into an actionable recommendation for the customer. 

The hint navigation is one of the things we use to help process engineers understand what went wrong. It shows them different sensor data, and they send feedback to say “this helped me understand things better” or “this didn’t help me.” Then we decide what to show them next, based on that feedback.

I worked very closely on this with the AMS team to find out what they thought we could do to improve it. As a result, we made the algorithm for responding to feedback much simpler and faster, so we could serve better hints about what’s going on. We made it more efficient, because now it won’t show redundant information.
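The feedback loop described above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration – the names, scoring scheme, and structure are my own assumptions, not SAM GUARD®’s actual implementation – but it shows the basic idea: rank candidate hints, never repeat one already shown, and adjust the ranking in response to engineer feedback.

```python
# Hypothetical sketch of a feedback-driven hint selector.
# All names and the scoring logic are illustrative assumptions,
# not SAM GUARD(R)'s actual code.

class HintNavigator:
    def __init__(self, candidate_sensors):
        # Start every candidate sensor with a neutral prior score.
        self.scores = {sensor: 1.0 for sensor in candidate_sensors}
        self.shown = set()

    def next_hint(self):
        """Return the highest-scoring sensor not yet shown, or None.

        Skipping already-shown sensors is what avoids redundant hints.
        """
        remaining = {s: v for s, v in self.scores.items() if s not in self.shown}
        if not remaining:
            return None
        best = max(remaining, key=remaining.get)
        self.shown.add(best)
        return best

    def record_feedback(self, sensor, helpful):
        """Boost a sensor that produced a helpful hint, demote one that didn't."""
        self.scores[sensor] *= 2.0 if helpful else 0.5


# Usage: show a hint, record feedback, get the next hint.
nav = HintNavigator(["pump_pressure", "inlet_temp", "valve_position"])
first = nav.next_hint()
nav.record_feedback(first, helpful=False)
second = nav.next_hint()
```

A real system would score sensors from the anomaly model rather than a flat prior, but the same structure – rank, filter out what was shown, update on feedback – is what keeps the navigation simple, fast, and free of redundant information.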

What impact does generative AI and ChatGPT have on your work?

ChatGPT, neural networks, and generative AI are hot news in the data science world, and I am interested in using them where they provide business value. But they are not currently the best algorithms for SAM GUARD®.

When it comes to language and image processing, deep learning and neural networks are much better than other methods. That’s because in those use cases, there’s a massive amount of data and many possible predictive features that are hard to choose between. Neural networks are excellent at automatically figuring out which aspects of the problem (pixels and letters, for instance) are most important in these domains, which is a very difficult task for humans.
But for applications like SAM GUARD® and many other business cases where the important data can be recognized by humans, more traditional machine learning is usually the better tool. Traditional ML methods are usually more transparent, which makes it easier to check whether the model’s characteristics match the business logic of the problem we’re trying to solve. It’s very difficult to understand why a neural network model is doing something. For SAM GUARD®, explaining anomalies is an essential part of the product, so neural networks are not a good fit.
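To make the transparency point concrete, here is a deliberately simple sketch (my own illustration, not SAM GUARD®’s algorithm): a per-sensor z-score detector. Its “explanation” falls directly out of the model – it can say exactly which sensor deviated and by how many standard deviations – whereas a neural network gives you a prediction without that built-in account of why.

```python
# Illustrative only: a transparent anomaly detector, NOT SAM GUARD(R)'s algorithm.
# A per-sensor z-score model explains *which* sensor is anomalous and by how
# much -- the kind of account a neural network cannot give directly.

from statistics import mean, stdev

def fit_baseline(history):
    """history: dict of sensor name -> list of readings from normal operation."""
    return {name: (mean(vals), stdev(vals)) for name, vals in history.items()}

def explain_anomalies(baseline, reading, threshold=3.0):
    """Return sensors whose current value deviates beyond `threshold` sigma."""
    flagged = {}
    for name, value in reading.items():
        mu, sigma = baseline[name]
        z = (value - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged[name] = round(z, 2)  # the z-score itself is the explanation
    return flagged


# Usage with made-up readings: inlet_temp jumps well outside its baseline.
history = {
    "inlet_temp":    [70.1, 69.8, 70.3, 70.0, 69.9],
    "pump_pressure": [4.0, 4.1, 3.9, 4.0, 4.05],
}
baseline = fit_baseline(history)
flagged = explain_anomalies(baseline, {"inlet_temp": 75.0, "pump_pressure": 4.02})
```

A real production model is far more sophisticated, but the principle carries over: with a transparent method, every alert comes with terms a process engineer can inspect against the plant’s physics.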

What do you see in the future for SAM GUARD®?

The most exciting development is that we’re moving into optimization. Until now, we’ve mostly focused on preventing problems, understanding them, and resolving them faster. But even when there’s nothing wrong, there can still be something you can improve. So we’re looking at using data to understand the plant better and make recommendations for optimization. 

We’re also improving analytics for SAM GUARD®, which will give us a much better understanding of what’s going on in the plant, and make us more agile in resolving the kind of problems I work with. 

We might also start adding knowledge about specific parts to our generic model. SAM GUARD® is brand agnostic, which is why it can work in every factory. But we’re looking to add some information about the characteristics of specific parts, like valves for example, to improve the product. If we can integrate this knowledge, we’ll be able to improve our modeling and error detection for those specific parts.