
Scaling Accelerated Drug Discovery with Grid

The Project:

SyntheticGestalt, an AI startup based in London and Tokyo, is developing an automatic system to make valuable drug discoveries en masse. Having received support from academic and governmental organizations in both the United Kingdom and Japan, they focus on the life sciences sector, developing machine learning models that make transformative discoveries such as novel drug candidate molecules and enzymes for the production of valuable molecules.

The SyntheticGestalt team learned about Grid after first experimenting with PyTorch Lightning.

The team runs machine learning algorithms and molecular simulations to validate potentially effective drug treatments. One of the most significant steps in their machine learning process is taking one-dimensional information about chemical molecules and proteins, which are just text strings, and converting them into information-rich vectors that represent their many properties. These vectors are then provided to the downstream models so that they have more information about the proteins and molecules, thus improving their predictions.
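The idea of turning a molecular text string into a numeric vector can be sketched with a toy example. The featurizer below is purely illustrative (the team's real models learn far richer embeddings); it simply counts common atom symbols in a SMILES-style string to produce a fixed-length vector:

```python
# Toy illustration of converting a molecular text string (SMILES) into a
# fixed-length numeric vector. Real pipelines use learned embeddings; this
# stand-in just counts a few atom symbols as proxy "properties".
ATOMS = ["C", "N", "O", "S", "P", "F", "Cl", "Br"]

def featurize(smiles: str) -> list[int]:
    """Return a count vector over common atom symbols."""
    counts = []
    for atom in ATOMS:
        n = smiles.count(atom)
        # Avoid double-counting: "C" also matches inside "Cl".
        if atom == "C":
            n -= smiles.count("Cl")
        counts.append(n)
    return counts

# Example: featurize aspirin's SMILES string into an 8-dimensional vector.
vector = featurize("CC(=O)OC1=CC=CC=C1C(=O)O")
```

Downstream models would consume vectors like this (in practice, dense learned representations) alongside other molecular features.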

Because many of the machine learning models they develop aim to predict new chemical formulas, or to discover existing chemicals in datasets with hundreds of millions of data points, one of their biggest priorities is the ability to scale. SyntheticGestalt soon expects to predict hundreds of thousands to millions of these text strings, and would like to convert as many of them as possible into information-rich vectors.

In the simulation portion of SyntheticGestalt’s work, they convert molecule and protein information into 3D structures to test whether a target protein is likely to interact with a molecule in a way that effectively targets a specific disease. The simulation helps validate how well any given molecule and protein fit together. It also requires a huge amount of sampling: it tests a wide variety of configurations and positions between molecule and protein to explore their binding interactions. There are many molecules to test against any given protein, and each of those molecules requires thousands of sampling steps.
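The shape of such a sampling loop can be sketched as follows. This is a toy stand-in, not SyntheticGestalt's actual simulation: the scoring function here is a placeholder for a real binding-energy model, and poses are reduced to a 3D offset.

```python
import random

def score_pose(pose: tuple[float, float, float]) -> float:
    """Placeholder scoring function; a real one models binding energy."""
    x, y, z = pose
    return -(x**2 + y**2 + z**2)  # for the demo, the best pose is the origin

def sample_best_pose(n_samples: int, seed: int = 0):
    """Score many random molecule-protein poses and keep the best one."""
    rng = random.Random(seed)
    best_pose, best_score = None, float("-inf")
    for _ in range(n_samples):
        pose = (rng.uniform(-5, 5), rng.uniform(-5, 5), rng.uniform(-5, 5))
        s = score_pose(pose)
        if s > best_score:
            best_pose, best_score = pose, s
    return best_pose, best_score

# Each molecule gets thousands of sampling steps like this.
pose, score = sample_best_pose(1000)
```

With many molecules per protein and thousands of samples per molecule, the total work multiplies quickly, which is why running these jobs in parallel matters so much.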

The Challenge:

Finding a platform that allows them to scale easily has been critical to SyntheticGestalt’s success.

The team previously had difficulty running multiple jobs at the same time, and trying to scale caused time delays as they waited for the next job to become executable. Although they were able to hack together a workaround and run two experiments in parallel, this solution was not ideal and caused more complexity in their training strategy.

The Solution:

Grid instantly solved SyntheticGestalt’s main scaling issue. They were able to launch all their jobs at the same time, saving days, weeks, and in some cases more than a month of work, depending on the workload they were running.

For example, they recently ran their largest set of conformations to date, about 15,000 (the first step in their simulation process). If they had used their original pipeline, it would have taken nearly 40 days. With Grid, they were able to complete this job in a single day.

The SyntheticGestalt team doesn’t believe this would have been possible without Grid. Recent, cutting-edge research in the field of machine learning presents such scaling solutions as novel and far-reaching. What surprised the SyntheticGestalt team was that when they started working with Grid, they were able to quickly set up what they feel is equivalent to what is being theorized in this research.

With Grid, the team was able to simultaneously start all their experiments, setting them up as separate instances. They were then able to smoothly download and collect all the data back into their custom-built tree hierarchy structure.

“A 100 by 100 job we did (100 experiments with 100 different molecules) took only 4 hours in total. Prior to Grid we were running these experiments one at a time which would have taken 400 hours, which is just not feasible. Grid is a lot faster in every aspect.”


The team benefited from:

  • Running jobs in parallel to increase efficiency and save a massive amount of time 
  • More efficiently managing AWS usage to accomplish more without increasing costs 
  • A UI that makes it easy to monitor usage in order to keep costs down 
  • Greater access to more hyperparameter values, making it easier to adjust their models and boosting confidence in the quality of their output
  • Grid handling their infrastructure requirements
  • Using Spot instances to auto-resume without losing any data
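The Spot auto-resume pattern in the last bullet generally comes down to checkpointing: persist progress as you go, and on restart pick up from the last checkpoint. A minimal sketch of that pattern (illustrative only, not Grid's internal implementation):

```python
import json
import os
import tempfile

def train(total_steps: int, ckpt_path: str) -> int:
    """Run training steps, checkpointing after each so a restart can resume."""
    step = 0
    # Resume from the last saved step if a checkpoint exists.
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            step = json.load(f)["step"]
    while step < total_steps:
        step += 1  # one unit of training work
        with open(ckpt_path, "w") as f:
            json.dump({"step": step}, f)  # persist progress
    return step

# Simulate a Spot interruption: run 3 steps, "get preempted", then resume.
ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
train(3, ckpt)            # first attempt stops early
final = train(10, ckpt)   # resumes from step 3 and finishes all 10
```

Real training would checkpoint model weights and optimizer state rather than a step counter, but the resume logic is the same.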

“We would be really struggling to do this work without a platform like Grid. We’d basically need to come up with our own solution which would take a long time. Especially since none of us are experts in this kind of computing infrastructure. Grid has been a massive savings in time.”

 

Getting Started with Grid:

Interested in learning more about how Grid can help you manage machine learning model development for your next project? Get started with Grid’s free community tier account (and get $25 in free credits!) by clicking here. Also, explore our documentation and join the Slack community to learn more about what the Grid platform can do for you.

Solving Complex Macroeconomic Problems With Machine Learning

The Project:

Jesse Perla is an Associate Professor of Economics at The University of British Columbia, where he focuses on macroeconomics and machine learning. While using the open source PyTorch Lightning project to reduce boilerplate in his code, he found Grid.ai.

Macroeconomics examines the decisions of large numbers of workers, firms, and policymakers interacting through financial, labor, and other markets. Due to the complexity of solving these models and bringing them to data, macroeconomists are increasingly focusing on machine learning tools to expand the scale of models economists can estimate and solve.

Professor Perla’s team is working on these topics from several directions. One set of projects uses deep learning with PyTorch Lightning to solve high-dimensional macroeconomic models that would not otherwise be feasible. The second set of projects uses new techniques in Bayesian optimization with the Julia programming language to estimate more traditional macroeconomic models from data, increasing performance by several orders of magnitude.

The Challenge:

Before discovering Grid, the data science team had to engage experts outside their competency area, and managing this infrastructure took time away from their research questions.

They also tried running more complicated models on available clusters, which was only possible because of their institutional backing. Despite these additional resources, however, the difficulty in setting up these models meant they often ended up using their own laptops. Machine learning tools require a vast amount of CPU, GPU, RAM, and storage in order to run.

Even in cases where the models could be run on a desktop computer, the inability to run a huge number of small variations of the model and parameters (i.e., a hyperparameter sweep) significantly slowed development.
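Concretely, a hyperparameter sweep is just the cross-product of candidate settings, with each combination becoming one small run. The sketch below shows the enumeration; the parameter names and values are illustrative, not taken from Professor Perla's models.

```python
from itertools import product

# Illustrative sweep definition: each key lists candidate values.
grid = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "hidden_units": [32, 64],
    "discount_factor": [0.95, 0.99],
}

def expand(grid: dict) -> list[dict]:
    """Expand a sweep spec into one config dict per combination."""
    keys = list(grid)
    return [dict(zip(keys, combo)) for combo in product(*grid.values())]

configs = expand(grid)  # 3 * 2 * 2 = 12 model variations to run
```

On a single desktop these 12 variations run one after another; a platform that launches them in parallel turns the sweep's wall-clock time into roughly that of a single run.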

The Solution:

Grid allowed the team to leverage processing power on the cloud, from their laptops, with no additional setup. The code running on a laptop is the same that runs on the cloud. 

The team was able to quickly train models and obtain results faster than ever before. They simply pointed to their GitHub repo, pushed a button, and their work would begin through the Grid platform. 

Using Grid did not require them to learn Linux, AWS, Kubernetes, or the other time-intensive DevOps skills that are generally required to train models and get them ready for production. Grid also mirrors versioning on GitHub, which allows you to track your work as it progresses, something that is unavailable to those running models on their own.

This led to another benefit of using Grid: enabling students to collaborate by sharing the results of their work. Professor Perla appreciated how everything in Grid is reproducible, simple and clean. There was no need to rely on something that a grad student may have run on their personal laptop, or other work that would have been impractical to reproduce. They were able to get a model running locally and scale it straight to Runs. Those Runs were then available in the Grid dashboard for everyone on the team to see and manage.

Grid Runs - Tensorboard

Finally, Grid gave the team flexibility to work within different environments. Researchers from a wide variety of specializations use various languages and frameworks to work with data that is specific to their work. The team, for instance, relies heavily on the Julia programming language for some of its projects, which Grid also supports.

“Grid is working well, my students are happy, and they’re writing code and running it, and that gives me great joy.” – Jesse Perla, Associate Professor of Economics, University of British Columbia

 

Grid Artifacts


The team benefited from:

  • Easy scalability – If it works on a laptop, it scales to the cloud without having to change any code 
  • Reproducibility – Easy to regenerate figures and visualizations 
  • Outstanding support from the Grid team 
  • Grid tying cleanly into the git versioning
  • Consistent uptime considering the complexity of the models they run
  • Ability to use the same cloud environment for different programming languages and platforms


Using Grid to Deliver Models Into Production 50% Faster

The Project:

Podsights connects podcast downloads to on-site activity, giving advertisers and publishers unprecedented insights into the effectiveness of their podcast advertising. Their overarching goal is to grow podcast advertising. Podsights has worked with almost 1,900 brands, the majority new to podcasting, to measure and scale their advertising.

To accomplish their goal of becoming the “operating system” of podcasting, Podsights created a Machine Learning Research & Development Team consisting of ML researchers Chase Bosworth and Victor Nazlukhanyan, working together with data analysts and API & Operations support.

Chase leads the Machine Learning Research & Development Team (ML R&D) as Machine Learning Engineering Manager. Her own research focuses on the NLP domain. She loves translating the rich conversational and storytelling podcast medium into insights via deep learning. Projects she works on include Brand Safety and Suitability and Ad Detection.

Victor was the second ML Engineer to join the ML R&D team at Podsights. His work includes researching and developing models relating to user segments, demographics, and conversion. The scope of his role is to holistically assess and address the breadth of machine learning methods that can be used to solve the problems at hand.

As Podsights seeks to grow both the headcount and project scope of their ML R&D team, with projects including vocal cloning, stylized text generation and content analytics, they needed a solution to fill in for their missing MLOps roles.

The Challenge:

Before Chase and Victor joined the team, Podsights was new to the machine learning space and lacked the experience necessary to put models into production. 

Podsights’ core feature, Podcast Attribution, didn’t rely on machine learning, and with the company’s sights set on broadening their offerings into media planning and beyond, the new ML team had to start from scratch. Despite being faced with a steep DevOps learning curve, prototyping and deployment were top of mind. Podsights recognized the importance of a tool that would eliminate the need to build MLOps infrastructure in-house.

The Solution:

Within a day of being put to use, Grid was already generating value for the ML R&D team. The learning curve wasn’t steep, and support was extremely responsive, which reduced any misunderstanding or misuse of the platform.

The last-mile problem in machine learning affects everyone in the industry. A big challenge for many research teams and startups is the inability to easily add new team members as they scale production of their models. Using Grid, Podsights was able to develop high-quality models without increasing the size of their team, moving from proof of concept to production-ready 50% faster than industry standards. Grid makes the R&D process and rapid prototyping seamless and easy, with a diversity of hardware accelerator configurations included with Runs. This helps Podsights automate, monitor, and version models effortlessly.

Grid.ai Runs Feature

“Grid allowed us to work independently, completely self-sufficient, and be able to get models into production significantly faster than had we needed to invest in MLOps roles internally.” Chase Bosworth, Machine Learning Engineering Manager (Spotify x Podsights)

 

Grid has proven essential to the Podsights team: it works, it scales, and it meets their needs. After nearly a year of using Grid, the two-person Podsights team has put nearly four models into production.

The team benefited from:

  • A wide variety of hardware instance types that can be matched to the right use case
  • Grid Runs making hyperparameter tuning a breeze
  • Being able to pause and resume Sessions to save cost or switch gears
  • Datastores offering a cleaner solution than swapping datasets and downloading a new one mid-session
  • A UI that is extremely smooth to use, with the CLI always available as a backup if needed

“The intangible value of Grid: I am a happier data scientist because I get to focus on the stuff that I love to work on, and ultimately the reasons that they hired me, which is to research and develop models and translate real world problems in podtech into machine learning products. I think it presents a justification for machine learning engineers and data scientists to focus on what we were hired to do, rather than spinning our wheels on infrastructure.” Chase Bosworth, Machine Learning Engineering Manager (Spotify x Podsights)

 


How Computer Vision Researchers Use Grid For Their Infrastructure Requirements

The Project:

Clutterbot is a New Zealand-based startup that builds autonomous cleaning robots. Syed Riaz, the Senior Machine Learning Engineer for Clutterbot, focuses on computer vision use cases to solve real world challenges. His expertise in object detection, tracking and multitask learning is helping the company develop a working prototype.

Regular cleaning is an essential task for improving health and reducing stress, but takes time away from our already busy lives. Clutterbot’s cleaning robot gives that time back so that users can focus on family or other personal activities. It also encourages a sustainable lifestyle by reducing the environmental impact of cleaning and organizing a space.

Clutterbot uses Grid to build a cleaning robot without worrying about MLOps infrastructure
The company uses state-of-the-art computer vision and artificial intelligence to navigate home environments, find toys/objects on the floor and organize them by category into containers. A mobile app houses and manages this entire process.

The Challenge:

The biggest challenge for Syed and his team was the considerable time they had to spend building machine learning (ML) infrastructure, which was both costly and resource-intensive. After they had completed their main research and development phase, the need to build and manage this infrastructure kept them from working on the project itself, delayed the prototyping of new models, and grew into a significant investment.

Not wanting to waste time and resources building out their own ML infrastructure led Clutterbot to look for a solution that would manage this facet of their pipeline.

The Solution:

A Clutterbot team member first learned about Grid after using PyTorch Lightning and was initially drawn to its convenient and easily-implemented features such as Sessions for quick prototyping and model training.

Grid Sessions are an interactive environment in which you can develop, analyze and prototype models or ideas on a live machine. Grid allocates the hardware you need on demand, so you only pay for what you need when you need it.

The Sessions feature, as well as its pre-installed tools, allowed the Clutterbot team to mount numerous GPUs automatically, enabling them to train their model faster. They were also able to pause their session on demand without losing any progress. Pausing a session, for example overnight or over the weekend, typically saves a considerable amount in training costs when nobody is available to monitor the session. Whenever you’re ready to resume, a simple click continues the session.

Because of Grid’s easily implemented features, Clutterbot no longer had to worry about developing their own ML infrastructure from the ground up. Pausing and resuming sessions allowed them to train their models on demand, whenever they needed to, avoiding the idle training costs they would have incurred managing raw AWS instances themselves. Instead, they were able to focus on solving the research, development, and business problems that mattered to them.

The team benefited from:

  • Not having to write a whole lot of code from scratch!
  • Flexible sessions which allowed them to spin up multiple GPUs and easily shut down when completed.
  • Not having to worry about the boilerplate, how to deal with CUDA and normal installation of a variety of drivers – everything is tied in with sessions.
  • Customizable settings that suit you and your team’s workflow.


Avoid Building and Maintaining an MLOps Training Environment

The Company:

AutoDevTech helps teams write better code by understanding code coverage, churn, and engagement. Using sophisticated machine learning techniques, the AutoDevTech platform accelerates a team’s efficiency by providing valuable insights into their development process in the context of industry norms, systematically exposing teams to best practices from some of the most well-crafted software.

Nick Gerner, AutoDevTech’s CEO/Founder and builder of many engineering teams, and Bora Banjanin, AutoDevTech’s lead Applied Scientist, are tasked with preserving knowledge from the past and providing that knowledge to new engineers.

[AutoDevTech’s Review Assistant]

The Challenge:

AutoDevTech’s objective is to help write and validate code, making engineers more efficient. When it came time to scale their machine learning training, they quickly realized they themselves needed to be more efficient. They saw using Grid as an opportunity to avoid the complexity of building and maintaining their own MLOps training environment.

Prior to using Grid, the AutoDevTech team focused on traditional statistical regression models. They wanted to leverage more sophisticated models and evaluated infrastructure solutions such as Horovod and SageMaker, which they realized would require a greater engineering effort to achieve the desired state-of-the-art performance.

“Looking into a service like Grid, we wanted to be using more sophisticated methods and the only way to do that was with large scale distributed training.” – Bora Banjanin, Applied Scientist, AutoDevTech

 

The team benefited from:

  • Training from laptop to cloud without code changes
  • Easily scaling to a large number of clustered machines
  • Open-source software and the open-source community
  • Avoiding a complex MLOps project
  • Affordable and transparent pricing

Having enjoyed many of the benefits offered by the PyTorch Lightning platform and community, it was an easy decision for AutoDevTech to leverage Grid and have one team support all their ML lifecycle needs. PyTorch Lightning already provided a significant amount of simplification with Lightning Trainer, and given their plans to use DeepSpeed integration in the future, it was natural to work in the Grid platform.

“We can now turn out large experiments on large-scale distributed models, allowing my engineers to make decisions on what to do next. How big is the context, should it be big or smaller, etc. I don’t think we could get answers without using Grid.ai.” – Nick Gerner, CEO / Founder, AutoDevTech

 

The Solution:

The services Grid offered enabled their machine learning engineers to focus on machine learning. To compete and move to market faster, the team needed more sophisticated methods, and the only way to achieve their goals was through large-scale distributed training. The team continued to leverage their own AWS environment with Grid as a tenant in their Virtual Private Cloud (VPC). This allowed AutoDevTech to leverage its existing platform while also taking advantage of Grid.

 

Grid Datastore management was an important function of the platform. The ability to pull data directly from a Grid Datastore simplified data management and sped up development.

The simplicity of Grid Runs enabled teams to quickly determine which resources were available and easily leverage Spot instances to deliver maximum value. With Auto-resume, the AutoDevTech team can restart instances automatically whenever Spot instances are reclaimed.

Using a platform to keep up with the fast-moving machine learning space justified Nick’s decision to move to Grid. Additionally, the support from the Grid staff and from the PyTorch Lightning and Grid communities has made Grid a key component in maximizing the product they serve to their customers.


How Neuroscientists Are Using AI To Understand Behavior

The Project

Dan Biderman is a Ph.D. candidate at Columbia University’s Center for Theoretical Neuroscience, advised by Statistics professors Liam Paninski and John Cunningham (both are also affiliated with the Grossman Center for the Statistics of Mind).

Dan is currently focusing on a project modeling animal behavior in videos, working to better understand how different brain regions (such as the cerebellum or the motor cortex) control natural movements. We connected with Dan to learn more about this project and how using Grid has enhanced their research.

The team began by tackling the well-known problem of pose estimation: how can we detect an animal’s joints from a raw video? With this information in hand, they hope to ask: What are the mechanical forces acting on each joint? What control strategies do biological agents use? 

Pose estimation is a crowded area of computer vision. The research relies mostly on supervised learning: researchers collect a large number of manually annotated images and train a neural network to predict joint positions from them. Dan’s team is taking a new semi-supervised approach: they observed that raw, unannotated videos already contain a rich spatiotemporal structure that standard approaches ignore. They are developing a video-centric software package called Lightning Pose, which combines PyTorch Lightning and NVIDIA DALI to efficiently load video sequences onto the GPU and use them to train convolutional neural networks. For now, they focus on videos collected by neuroscience labs that track whole-body movements, arm reaches, and eye movements.

Dan used his various Grid Sessions to develop and test new statistical tricks that let the network make sense of unannotated video frames. Dan and his team hope that in the coming months their package can also be used on top of Grid. That way, researchers in the neuroscience community could reproduce their results, contribute new ideas to the package, and perform scalable pose estimation for neuroscience projects without worrying about infrastructure.
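One intuition behind using unannotated video is temporal smoothness: joints rarely teleport between frames, so a prediction sequence that jumps around can be penalized without any manual labels. The toy penalty below illustrates that idea only in spirit; Lightning Pose's actual unsupervised losses are richer and operate on network outputs, not plain coordinate lists.

```python
# Toy illustration of a temporal-smoothness signal from unannotated video:
# penalize large frame-to-frame jumps in a predicted joint's (x, y) track.
# No annotation is needed, only the raw video's temporal ordering.
def smoothness_penalty(trajectory: list[tuple[float, float]]) -> float:
    """Sum of squared displacements between consecutive frames."""
    penalty = 0.0
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        penalty += (x1 - x0) ** 2 + (y1 - y0) ** 2
    return penalty

# A jittery predicted trajectory is penalized far more than a smooth one.
smooth = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
jittery = [(0.0, 0.0), (5.0, 0.0), (0.0, 0.0)]
```

In a training loop, a term like this would be added to the supervised loss so that unannotated frames still shape the network's predictions.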

 

The Challenge

Before discovering Grid, the team managed all their cloud experiments themselves on AWS instances using boto3 scripts and vanilla PyTorch. They used S3 buckets to store large amounts of videos and images and ran into challenges setting up environments and managing SSH and Git. “90% of our time was invested in just making sure code runs,” said Dan.

 

The Solution

Dan learned of Grid.ai in the PyTorch Lightning documentation and became curious about the workflow and how Grid could help solve their pipeline-management problem. Once Dan had implemented Grid, he was able to prototype new models for pose estimation. The team now has less code than existing algorithms, thanks to reduced boilerplate from PyTorch Lightning, an integration with NVIDIA DALI, and a solid state-of-the-art pose estimation package to support future neuroscience projects.

 

 

As their Grid Sessions evolve, Dan looks forward to an increasingly pre-configured environment. He is currently enjoying the VS Code integration and also loves the Grid community, which provides powerful support within an accessible timeframe. When things go wrong, there are other humans to work with.

Grid exceeded Dan’s expectations through the JupyterLab community of developers, SSH login/logout, and the ability to see the status of their compute. Interacting with instances through the Grid CLI is also more pleasant than on AWS, and getting stats updates is clear and efficient.

“If someone’s use case is prototyping models, where there is a lot of uncertainty around model architecture, getting started with Grid will be much faster than doing it yourself on a cloud provider. In contrast to a university cluster, being able to explicitly control which hardware you are using is beneficial.” – Dan Biderman, Ph.D. Student

 

Longer-term plans will evolve as more users in the lab adopt Grid for their projects, further streamlining the sharing of model results, data loading, and the capture of results on a single platform.

 


Helping Consultants Apply Modern Methods To Industrial Problems

The Company

Willows.ai is a Montreal-based team that comprises machine learning, AI, and software experts who develop and deliver full-stack AI solutions for the manufacturing industry. Manufacturing has become a proving ground for leveraging the benefits of machine learning, where companies use technology to optimize visual systems to monitor and increase safety, track progress and provide real-time status updates, and reduce common process-driven waste. With expertise in computer vision and building robust, explainable machine learning solutions, Willows.ai helps manufacturers apply the latest research to their business problems. 

 

The Problem

Prior to discovering Grid, the Willows.ai team managed a complex system to compare and analyze the results of their machine learning operations. This complexity often led to duplication of work and ballooning computational costs. 

Willows knew that if they wanted to scale, they needed to avoid wasting precious time worrying about cloud infrastructure. They wanted a platform that freed up resource time and delivered results faster for both them and their clients. 

 

Solution 

The Willows team discovered Grid.ai through the open source PyTorch Lightning community. The Grid platform, developed by the same team behind PyTorch Lightning, provided Willows with the capabilities they needed to manage scalable ML workflows. “We were able to solve our need to deliver more value by providing massive-scale training to our clients,” explained Dr. Andrew Marble. “It is extremely valuable for us to manage and see all the experiments that we create within a single dashboard view.” The Grid platform provides a unified view for managing run metrics, logs, and artifacts.

 

grid platform

 

What Willows loved most about Grid is its Hyperparameter Optimization functionality that enabled them to scale variations of their models in parallel without needing to change their code or take on external dependencies. 

 

“Being able to parameterize and add parameters to your model is where the real value [for Willows] is. Grid’s Hyperparameter Sweeps are the most interesting feature to me.” – Dr. Andrew Marble, Principal, Willows.ai

 

Grid’s ability to manage interruptible compute such as Spot instances at the click of a button reduced training costs for Willows.ai. Receiving real-time cost estimates before, during, and after running experiments with Grid gave Willows the transparency it needed to manage costs.

willows runs


 

Get to know more about Willows.ai, and connect with Dr. Andrew Marble via email at andrew@willows.ai

Managing Machine Learning With Limited Resources

Machine Learning Made Easier With Grid.ai

In the race to implement machine learning, businesses of all sizes look for more accurate predictions that will allow them to stay ahead and differentiate themselves from the competition. One of the main roadblocks is the availability and affordability of machine learning resources. In this post, we speak with Felix Dittrich, lead Machine Learning Developer at Memoresa, about how Grid.ai solved some of their ML challenges.

 

Discovering Grid

Based in Leipzig, Germany, Memoresa is an online platform and mobile app for easy estate planning, secure emergency provision, and systematic digital organization. Memoresa prides itself on helping users organize their life documents, providing peace of mind in organizing and planning their lives.

Before finding Grid, Felix and his team were challenged by the many steps involved in training and managing machine learning models, mainly due to the limited number of developers, capabilities, and resources of a small team.

Memoresa captures document images in their application; the ML engineer uses Grid to train models that extract metadata for each user and auto-complete required forms, simplifying the onboarding process. “We used Grid to train a custom named entity recognition model using Transformers and PyTorch Lightning, as well as ONNX for quantization,” mentioned Felix.

While using the open-source PyTorch Lightning project to eliminate boilerplate in his code, Felix discovered Grid.ai. Founded by the creators of PyTorch Lightning, Grid is a platform designed to develop and train deep learning models at scale. The team at Memoresa needed an easy way to coordinate all the steps in training and managing machine learning models. Grid made it easy for Felix’s team to address this, as well as version control all their training data and model artifacts out of the box.

 

Memoresa 1

Exceeded Expectations!

Implementing Grid exceeded Felix and his team’s expectations. “All the steps related to end-to-end model training can be managed in one place. When my code runs in Sessions, I can scale it with Runs without any code changes.”

For start-ups like Memoresa, this means that it takes less time to go from an innovative prototype to a top-performing model. Felix remarked that he likes “how easily Grid Sessions enables me to prototype and debug my models and how I can scale my Session code with different hyperparameter configurations with Runs without any code modifications.”

“I love that Grid supports automatic versioning of datastores and model artifacts. Grid makes it simple to share datasets, model assets, and code. As our company scales, it’s straightforward to introduce new teammates for collaboration.” This means that start-ups such as Memoresa can easily share the outcomes of their machine learning experiments, reducing the time from experimentation to business value.

Memoresa 2

 

Memoresa 3


Grid Platform