
Track and Visualize Information From Your Pipelines: neptune.ai + ZenML Integration

When building ML models, you spend a lot of time experimenting. Even with just one model in the pipeline, you may try out hundreds of parameter combinations and produce tons of metadata about your runs. And the more models you develop (and later deploy), the more there is to store, track, compare, organize, and share with others.

neptune.ai does exactly that. It’s an experiment tracker and model registry that helps you have better control over your experiments and models. You log all the metadata into this one source of truth, and you see it in an intuitive web app. 

On top of that, neptune.ai integrates with any MLOps stack, and it just works. 

The same idea stands behind ZenML. It’s a technology-agnostic, open-source pipeline framework that’s easy to plug in and just works.

Naturally, we joined forces and worked on the Neptune-ZenML integration to make the user experience even better. Now, with less boilerplate code, you can log and visualize information from your ZenML pipeline steps (e.g., models, parameters, metrics).

Here’s what the results look like in the Neptune app:

See example in the app

We’ll show you how to get to this dashboard in a sec. 

neptune.ai + ZenML: Why use them together?

If you’ve been into MLOps even for 5 minutes, you probably already know that there’s no one correct way to go about it. That’s actually why both neptune.ai and ZenML focus heavily on integrating with various components of the MLOps tooling landscape. After all, the MLOps stack is a living thing – you should be able to scale it up or down and switch components without a hassle.

So when working on this integration, we did some brainstorming to figure out who would benefit the most from the Neptune Experiment Tracker (provided with the Neptune-ZenML integration).

If you check any of the boxes below, you’re definitely in this group:

  • You’ve already been using neptune.ai to track experiment results for your project and would like to continue doing so as you are incorporating MLOps workflows and best practices in your project through ZenML.
  • You’re looking for a more visually interactive way of navigating the results produced by your ZenML pipeline runs (e.g., models, metrics, datasets).
  • You’d like to connect ZenML to neptune.ai to share the artifacts and metrics logged by your pipelines with your team, organization, or external stakeholders. 
  • You’re just starting to build your MLOps stack, and you’re looking for both experiment tracking and pipeline authoring components. 

How does the Neptune-ZenML integration work? 

All right, it’s time to see how it actually works.

In this example, we log a simple ZenML pipeline to Neptune using the Experiment Tracker stack component. The pipeline consists of 4 simple steps, 2 of which use the Neptune-ZenML integration to log training and evaluation metadata. 

  • The example assumes that you have ZenML installed together with the Neptune integration. If that’s not the case, please refer to the documentation.
  • To use neptune.ai, you also need to configure your API token and the project you want to log into. This can be done either by setting environment variables or by passing those values as command-line arguments when registering the stack component (see the sketch right after this list).
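
For instance, here’s a minimal sketch of the environment-variable route. NEPTUNE_API_TOKEN and NEPTUNE_PROJECT are the variables the Neptune client reads; the values below are placeholders, not real credentials:

import os

# Placeholder credentials: substitute your own Neptune API token
# and the workspace/project you want to log into
os.environ["NEPTUNE_API_TOKEN"] = "<your-api-token>"
os.environ["NEPTUNE_PROJECT"] = "<your-workspace>/<your-project>"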

If you want to see a full-fledged example that uses the Neptune integration with scikit-learn to train a simple regressor, head over to this GitHub repo.

Here, we’ll talk about the most important stuff. 

To use the Neptune Experiment Tracker flavor (provided by the Neptune-ZenML integration), you need to specify it either in the `step` decorator or in the configuration file (see the listings below).

Option 1: Using arguments in the step decorator

from zenml.steps import step

@step(experiment_tracker="NEPTUNE_TRACKER_COMPONENT_NAME")
def my_step() -> None:
    ...

Option 2: Using configuration file (config.yaml)

steps:
  my_step:
    experiment_tracker: NEPTUNE_TRACKER_COMPONENT_NAME

This tells ZenML to instantiate and store a Neptune run object. You can fetch it inside your step using our `get_neptune_run` function (see the listing below). Once you have this object, you can log pretty much whatever metadata you’d normally log.

from zenml.steps import step
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run

@step(experiment_tracker="NEPTUNE_TRACKER_COMPONENT_NAME")
def my_step() -> None:
    neptune_run = get_neptune_run()
    ...


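Once you have the run object, anything the Neptune client supports works: assigning single values, appending to series, or uploading files. Here’s a quick sketch; the step name, field paths, and values are ours, purely for illustration:

from zenml.steps import step
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run

@step(experiment_tracker="NEPTUNE_TRACKER_COMPONENT_NAME")
def train_model() -> None:
    neptune_run = get_neptune_run()

    # Single values: parameters and final metrics
    neptune_run["params/optimizer"] = "adam"
    neptune_run["metrics/final_accuracy"] = 0.92

    # Series: per-epoch values appended with log()
    for epoch_loss in (0.9, 0.5, 0.3):
        neptune_run["metrics/epoch_loss"].log(epoch_loss)

    # Files: upload artifacts such as plots or serialized models
    # neptune_run["artifacts/model"].upload("model.pkl")
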
You can also tell ZenML to pass custom tags to the Neptune run object upon instantiation. Again, there are two ways to achieve this – code and config file (see listings below).

Option 1: Using arguments in the step decorator

from zenml.steps import step
from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings

settings = NeptuneExperimentTrackerSettings(tags={"your", "tags"})

@step(
    experiment_tracker="NEPTUNE_TRACKER_COMPONENT_NAME",
    settings={"experiment_tracker.neptune": settings},
)
def my_step() -> None:
    ...

Option 2: Using configuration file (config.yaml)

steps:
  my_step:
    experiment_tracker: NEPTUNE_TRACKER_COMPONENT_NAME
    settings:
      experiment_tracker.neptune:
        tags: ['your', 'tags']

Running the full example provided in the ZenML repository will log training and evaluation metadata to Neptune. 
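
To give a feel for how the steps get wired together, below is a minimal, self-contained sketch. It assumes ZenML’s classic @pipeline/@step API (current when this integration shipped), and the toy data, the least-squares “training”, and all the names are ours rather than the repo’s:

from zenml.pipelines import pipeline
from zenml.steps import step, Output
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run

@step
def load_data() -> Output(x=list, y=list):
    # Toy data standing in for the repo's real data-loading step
    x = [0.0, 1.0, 2.0, 3.0]
    y = [0.1, 0.9, 2.1, 2.9]
    return x, y

@step(experiment_tracker="NEPTUNE_TRACKER_COMPONENT_NAME")
def train_and_log(x: list, y: list) -> float:
    # Fit y = a * x by least squares, then log the slope and error to Neptune
    a = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    mse = sum((a * xi - yi) ** 2 for xi, yi in zip(x, y)) / len(x)
    neptune_run = get_neptune_run()
    neptune_run["params/slope"] = a
    neptune_run["metrics/train_mse"] = mse
    return mse

@pipeline
def toy_pipeline(loader, trainer):
    x, y = loader()
    trainer(x, y)

toy_pipeline(loader=load_data(), trainer=train_and_log()).run()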

Below are the results of such a pipeline run visible in the Neptune app. You can check this example here (no registration is needed). 


It really is that simple.


neptune.ai is an MLOps stack component for experiment tracking, so we’re constantly working on making it easy to integrate with other parts of the workflow.

It is already integrated with 25+ tools and libraries, and the list is growing. You can check our roadmap to see what’s currently under development.  
