In this three-part series, we’ve been detailing a tiered approach to introducing and incorporating data science into your organization. In Part One: Crawl and Part Two: Walk, we discussed how to get started from scratch and how to build out a dedicated data science program. Today, we’ll dive into the third and final phase to see how to sustain quality, centralize governance, incorporate user feedback, and more.
Part three: We’re running!
We’ve got models
Once your organization is regularly producing models and has models running in production, you need a strategy for maintaining them, staying aware of their performance, and making changes as necessary. When this happens, it’s time to run.
Of course, this isn’t an exhaustive list, but some of the things we’ll focus on in this phase are:
- continued quality assessments,
- governance for central awareness and accountability,
- care and maintenance of models, and
- user feedback.
While many of these topics come into play during the Run phase, it’s important to be aware of them during the Walk and even Crawl phases. Knowing that you’ll need to address them in the future will help you determine how you want to architect your data science workflow early on.
Continued quality
Part of model assessment happens before you release the models: you test them on held-out data and, hopefully, get good results before anything ships. But you’re not done after you deploy. Model performance should continue to be evaluated as new data is added and as the number of models in your portfolio increases. Implementing an automated alert system that watches for deterioration in evaluation metrics is an excellent way to monitor performance. Poorly performing models should be assessed and either updated or removed if they have reached the end of their lifespan.
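To make this concrete, here is a minimal sketch of what a scheduled performance check might look like. It assumes a binary classifier with a `predict_proba` method, a baseline metric recorded at release time, and a hypothetical `notify` hook wired to your alerting tool; none of these names come from a specific product.

```python
# Minimal sketch of a scheduled evaluation check. The baseline value,
# threshold, and notify() hook are illustrative assumptions.
from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.82        # metric recorded when the model was approved
ALERT_THRESHOLD = 0.05     # alert if AUC drops by more than 5 points

def check_model_performance(model, X_recent, y_recent, notify):
    """Score the model on recently labeled data and alert on deterioration."""
    current_auc = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])
    if BASELINE_AUC - current_auc > ALERT_THRESHOLD:
        notify(
            f"Model AUC dropped from {BASELINE_AUC:.2f} to {current_auc:.2f}; "
            "review for retraining or retirement."
        )
    return current_auc
```

A check like this can run on whatever schedule fits your data, and the same pattern works for regression metrics or business KPIs.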
Planning for updates is important. For some models, the underlying data may not change very often, and updates may not need to be frequent. Some updates can be planned, but other models will need to be updated in response to unplanned events that create large-scale change. Events like the recent COVID-19 pandemic likely had a drastic effect on many models that depend on human behavior.
The pandemic may have far-reaching, long-term effects on data models. Whether pre-pandemic data will continue to be useful for training, what to do with data collected during the pandemic, and how both will affect forecasts on future data all depend heavily on the particular context of the model in question.
Centralized governance
With many models in production, you’ll need some centralized governance to serve as a single source of truth and guidance.
Governance should include a definition of the roles and responsibilities for each piece in the data science workflow, from model creation and validation to model approval, management, and deployment.
Governance should also ensure that models and data usage comply with legal and regulatory requirements. While this is not the most exciting part of the data science process, it is unquestionably important and can be a complete project showstopper. As part of legal compliance, information about who has access to the data and models (access control) should be tracked within the centralized governance. In addition to legal and regulatory compliance, a review for ethical compliance (also known as “responsible ML”) should be part of the governance to ensure bias is not introduced at any point, from the problem statement, raw data selection, and feature engineering to the final trained model.
Documentation is incredibly important and needs to be available and complete for all models. Any assumptions about the model, information about the data sources, expected inputs and outputs, and how the model is evaluated should all be documented. Having a centralized location for this information gives people a known place to look and encourages a consistent documentation style, which makes the documentation easier to understand and sets clear expectations for users.
Additionally, there should be a framework for documenting, tracking, and communicating when changes are made to a model and by whom. (The “whom” may be a person or even an automated system that re-trains models on a set schedule.) The ability to trace a result back to a specific model version, even one that is no longer in production, may be necessary as part of legal compliance.
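As one illustration, change records like the following could be kept in a central registry so any production result can be traced back to the exact model version that produced it. The field names, version scheme, and example values here are assumptions made for the sketch, not a prescribed standard or a specific tool’s schema.

```python
# Illustrative structure for centrally tracking model changes; field names
# and example values are assumptions, not a specific registry's schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ModelChangeRecord:
    model_name: str
    version: str                  # e.g. "2.1.0"
    changed_by: str               # a person or an automated retraining job
    change_reason: str            # "scheduled retrain", "data drift", ...
    training_data_snapshot: str   # pointer to the exact data used
    evaluation_summary: dict      # metrics recorded at approval time
    approved_by: str
    timestamp: datetime = field(default_factory=datetime.utcnow)

# Hypothetical entry: a nightly job retrained and re-released a model
record = ModelChangeRecord(
    model_name="readmission-risk",
    version="2.1.0",
    changed_by="nightly-retrain-job",
    change_reason="scheduled monthly retrain",
    training_data_snapshot="s3://model-data/readmission/snapshot-042/",
    evaluation_summary={"auc": 0.81},
    approved_by="jdoe",
)
```

Whether these records live in a database, a model registry product, or version control matters less than having them in one agreed-upon place.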
It is beneficial to have this information collected in a centralized place where there is knowledge of who is using these models and who might be impacted by any changes, so those impacts can be communicated. Centralized governance helps when situations like COVID-19 arise; the models can be evaluated to determine which might be affected and need to be updated. It also allows the organization to quickly figure out who uses the models, who owns them, and who will be responsible for making the necessary updates.
Care and maintenance of models in production
As discussed in the governance section, in addition to general model updates, there also needs to be some discussion around who is responsible for managing the deployed models. Many of your models will change over time, and as an organization you’ll need to determine how many versions of each model you will continue to support and what the process will look like when you deprecate models. Deployments often need to be coordinated with other teams to ensure that model versions match what is expected by the services, dashboards, and reports that utilize the models.
Once the models are in production, someone (or something) needs to ensure they are up and running. While you could wait for an end-user to report that your service is down, it is far better to set up an alert system that notifies your team when there is a failure. Depending on the size of your organization, tracking these services may be the responsibility of the data science team or of a dedicated DevOps team. In addition to the general status of the model, there should be a clear determination of who is responsible for ensuring the services stay up and the models scale appropriately.
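A liveness check can be as simple as periodically pinging the model endpoint. The sketch below assumes a hypothetical health URL and a `notify` hook wired to whatever paging or chat tool your team uses.

```python
# Minimal sketch of a liveness check for a deployed model service.
# The endpoint URL and notify() hook are placeholders, not a real API.
import requests

MODEL_HEALTH_URL = "https://models.example.com/readmission-risk/health"

def check_service(notify, timeout_seconds=5):
    """Ping the model service and alert the team if it is down or erroring."""
    try:
        response = requests.get(MODEL_HEALTH_URL, timeout=timeout_seconds)
        if response.status_code != 200:
            notify(f"Model service returned HTTP {response.status_code}")
    except requests.RequestException as exc:
        notify(f"Model service unreachable: {exc}")
```

In a larger organization this kind of check usually lives in an existing monitoring stack rather than a hand-rolled script, but the question of who owns the response is the same either way.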
User feedback
Remember that the end user’s feedback should be part of the data science workflow too. If your organization is developing great models and they aren’t being used, you need to figure out why.
One of the first steps is to identify which models are being used and which aren’t. Check with the users to see why certain models aren’t being used. In some cases, it could be that you have “bad models.” What defines a bad model? It could be that the models aren’t performing well at their designed task (ideally, a good evaluation system with alerts will catch this before negative user feedback arrives). It could also be that they don’t answer the question they were intended to answer. At this point, we may have to start the process over. And remember, that’s OK! Building models is an iterative process.
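Even knowing which models get called at all requires a little instrumentation. As a rough sketch, and assuming the model is served from code you control, each scoring request could be recorded so utilization can be reviewed later; the logger name and fields here are illustrative assumptions.

```python
# Illustrative sketch of recording model usage so unused models can be
# identified; the logger name and fields are assumptions.
import logging
from collections import Counter

usage_counts = Counter()
logger = logging.getLogger("model_usage")

def log_prediction(model_name: str, model_version: str) -> None:
    """Record each scoring request for later utilization review."""
    usage_counts[(model_name, model_version)] += 1
    logger.info("prediction_served model=%s version=%s", model_name, model_version)
```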
Maybe your model isn’t bad, though; it might be performing wonderfully when you look at the evaluation metrics. So why isn’t it being used? Sometimes it’s because users don’t know the model exists. Sometimes it’s because, despite how good the model is, end users don’t understand what the results mean or exactly how to use them.
Documentation with examples and human-readable explanations is helpful in this case. Lunch and learn sessions or other training methods to help people use your models can improve utilization and satisfaction with the results.
Training is also a great way to make sure models are being used in expected ways. When models are applied incorrectly, users can become frustrated with the results and may conclude that the model is performing poorly when the model itself isn’t the issue.
Crawl, Walk, Run
The Crawl, Walk, Run paradigm we’ve explored in this three-part blog post series is only one of many ways to introduce data science into your organization. The great news is there’s a lot of flexibility in how you implement each of these steps, and you can break the process into more phases or combine them into one if you want. Whether you’re already using data science in your organization, starting to build a team, or learning to build models, hopefully you were able to gain a few nuggets to enhance what you’re already doing within your organization.