Walk: Bringing Data Science into your Organization

In this three-part series, we’re exploring a tiered approach to introducing and incorporating data science into your organization. In Part One: Crawl, we discussed how to get started from scratch. Today in Part Two: Walk, we’ll address issues that may emerge and how to overcome them, how to build out a dedicated data science team, and more.

Part two: We’re walking!

Where do we go from here?

You now have your development toolset identified. You've built your first model. You've gained some confidence that machine learning can solve some of your organization's problems, and you're starting to field requests for new model development. Now what? It's time to walk.

There are a couple of issues that start to crop up at this point.

You’ve got to figure out how to build out a team to start handling all the new requests. We’ll talk briefly about some of the options you have for building a team, but that could be a whole topic of discussion itself.

Once you build a team, you must work out the larger workflow process. You'll have multiple people working on the code base, so you'll need a code management strategy. You'll also likely need a review and approval workflow, as well as quality gates to ensure the models you're deploying meet expectations.

As you start producing more models, you also need to start thinking about how you’re going to get the results of all these models into your stakeholders’ hands. It would be a shame for all the hard work your team does to get stuck in the “data science lab” and never get used. Unfortunately, that is another barrier that many data science teams face. Ideally, the results will be available in an interactive format that responds to changes, but that may not always be possible. Let’s aim for that goal though.

Building a data science team

How you build out your data science team depends a lot on the structure of your company and how much open communication there is between teams. Some suggested configurations are listed below.

Centralized: A dedicated team of data scientists, compartmentalized from the rest of the company.

Matrixed: A dedicated team of data scientists in which individuals are temporarily assigned to other teams but return to work on special projects.

Embedded: Data scientists are permanently part of cross-functional teams.

Keeping your data scientists centralized allows for greater cohesion within the data science team and increased sharing of best practices. It's also easier to ensure consistency in coding styles, code management, and quality requirements.

Embedding data scientists, on the other hand, allows for greater cohesion with the team focused on the specific project, and can help provide the contextual knowledge necessary for successful feature engineering and building quality models.

I like having a team that is somewhere in between centralized and matrixed where the data scientists are partially assigned to another team during the project lifetime but do not completely “leave” the centralized data science team. This helps ensure continued collaboration and discussion with fellow data scientists.

Model and code management

Inevitably, once you have more people working on code, you need some sort of model and code management system. If you're not familiar with code repositories like Bitbucket or GitHub, they allow you to store your code in a central location and help guide your team through the development workflow.

These repositories feature version control, allowing your team to track changes made to the code base over time (and easily revert to previous versions if necessary). They also provide access control to restrict who can work on the code, and workflow tools to guide the code review process through pull requests.

Make quality a priority

As your team picks up momentum and expands model development (especially if your organization comes to rely heavily on a "citizen data science" model, where those creating the models may not have a data science background), it's incredibly important to have a well-defined code review and quality assurance process.

A maintainable, understandable codebase is incredibly important. One way to achieve this is to ensure the code is fully documented using a well-defined documentation style. A reliable codebase can be achieved through peer reviews that check for both technical and logical correctness, as well as unit tests that must pass before the code is submitted for review.
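To make that concrete, here is a minimal sketch of what such a unit test might look like in Python with pytest. The helper function, column name, and test data are hypothetical, used only to illustrate the idea of testing feature-engineering code before it goes to review.

    import pandas as pd

    def fill_missing_ages(df):
        """Hypothetical feature-engineering helper: fill missing ages with the median age."""
        out = df.copy()
        out["age"] = out["age"].fillna(out["age"].median())
        return out

    def test_fill_missing_ages_leaves_no_nulls():
        # The engineered column should never contain missing values.
        df = pd.DataFrame({"age": [30.0, None, 50.0]})
        assert fill_missing_ages(df)["age"].isna().sum() == 0

    def test_fill_missing_ages_uses_median():
        # The missing value should be filled with the median of the known ages (40 here).
        df = pd.DataFrame({"age": [30.0, None, 50.0]})
        assert fill_missing_ages(df).loc[1, "age"] == 40.0

Running tests like these automatically on every pull request gives you a simple quality gate: code that breaks a test never reaches a reviewer.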

In addition to having a well-documented, reliable codebase, you also want to make sure your models perform well. In the Crawl phase, we talked about how to define whether a model is successful. As the team grows, there needs to be a well-defined evaluation process; standardizing that process ensures all models are assessed consistently.
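One way to standardize that evaluation, sketched below in Python with scikit-learn, is a shared helper that computes the same metrics for every candidate classifier and compares them against an agreed threshold. The metric set and the 0.75 cutoff are illustrative assumptions, not recommendations.

    from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

    # Illustrative acceptance threshold; set this to whatever your stakeholders agree on.
    MINIMUM_ROC_AUC = 0.75

    def evaluate_classifier(y_true, y_pred, y_score):
        """Compute the same metric set for every candidate model so results are comparable."""
        metrics = {
            "accuracy": accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred),
            "recall": recall_score(y_true, y_pred),
            "roc_auc": roc_auc_score(y_true, y_score),
        }
        metrics["meets_bar"] = metrics["roc_auc"] >= MINIMUM_ROC_AUC
        return metrics

Because every model goes through the same function, the team can compare candidates side by side, and reviewers know exactly which numbers to look at.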

Operationalize models

There are many ways to operationalize your models. Which one works best depends a lot on how your organization is set up (self-hosted vs. cloud-hosted) and exactly what you’re trying to do with the results of the model (sending the results to another service vs. end-user directly accessing the results).

For algorithms that have pre-determined slicing and dicing possibilities, you may want to integrate the models directly into the ETL (extract, transform, load) process and make the results available alongside the source data for use in reports and dashboards. For models where it would be beneficial to change input variables on the fly, you may want to expose them as microservices so that requests can be made in real time and different results viewed in your report/dashboard platform.
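As a sketch of the microservice option, the Python snippet below wraps a previously trained model in a small Flask endpoint so that a dashboard or another service can request predictions for whatever inputs a user chooses. The model file, feature layout, and port are assumptions made for illustration.

    import joblib
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Assumes a model trained elsewhere and saved with joblib; the path is illustrative.
    model = joblib.load("models/readmission_model.pkl")

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects a JSON body such as {"features": [[age, length_of_stay, prior_visits]]}.
        payload = request.get_json()
        prediction = model.predict(payload["features"]).tolist()
        return jsonify({"prediction": prediction})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

A self-hosted organization might run something like this behind an internal gateway, while a cloud-hosted one might reach for a managed equivalent, but the idea is the same: the model answers requests on demand instead of waiting for the next ETL run.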

We’re well into the process now, but there’s one more phase to go. Stay tuned for the third blog post to learn about the final phase of bringing data science into your organization.
