One of the biggest challenges I encountered in my career as a data scientist was migrating the core algorithms in a mobile AdTech platform from classic machine learning models to deep learning. I worked on a Demand Side Platform (DSP) for user acquisition, where the role of the ML models is to predict whether showing an ad impression to a device will result in the user clicking on the ad and installing a mobile app. For a quick hands-on overview of the click prediction problem, please check out my previous post.
While we were able to quickly get to a state where the offline metrics for the deep learning models were competitive with the logistic regression models, it took a while to get the deep learning models working smoothly in production, and we encountered many incidents along the way. We started with small-scale tests using Keras for model training and Vertex AI for managed TensorFlow serving, and ran experiments to compare iterations of our deep learning models with our champion logistic regression models. We were eventually able to get the deep learning models to outperform the classic ML models in production and modernize our ML platform for user acquisition.
When working with machine learning models at the core of a complex system, there are going to be situations where things go off the rails, and it’s important to be able to quickly recover and learn from these incidents. During my time at Twitch, we used the 5 W’s approach to writing postmortems for incidents. The idea is to identify “what” went wrong, “when” and “where” it occurred, “who” was involved, and “why” the problem resulted. The follow-up is to then establish how to avoid this type of incident in the future and to set up guardrails to prevent similar issues. The goal is to build an increasingly robust system over time.
In one of my past roles in AdTech, we ran into several issues when migrating from classic ML models to deep learning. We eventually got to a state where we had a robust pipeline for training, validating, and deploying models that improved upon our classic models, but we ran into incidents during this process. In this post we’ll cover 8 of the incidents that occurred and describe the following steps we took for incident management:
- What was the problem?
- How was it discovered?
- How was it fixed?
- What did we learn?
We identified a variety of root causes, but generally aligned on similar solutions when making our model pipelines more robust. I hope sharing details about these incidents provides some guidance on what can go wrong when using deep learning in production.
Incident 1: Untrained Embeddings
What was the problem?
We found that many of the models that we deployed, such as those predicting click and install conversion, were poorly calibrated. This meant that the conversion rate predicted by the model was much higher than the actual conversion rate that we observed for the impressions we served. After drilling down further, we found that the miscalibration was worse on categorical features where we had sparse training data. Eventually we discovered that we had embedding layers in our install model where we had no training data available for some of the vocabulary entries. What this meant is that when fitting the model, we weren’t making any updates to these entries, and the coefficients remained set to their randomly initialized weights. We called this incident “Untrained Embeddings”, because we had embedding layers where some of the layer weights never changed during model training.
How was it discovered?
We mostly discovered this issue through intuition after reviewing our models and data sets. We used the same vocabularies for categorical features across two models, and the install model data set was smaller than the click model data set. This meant that some of the vocabulary entries that were fine to use for the click model were problematic for the install model, because some of the vocab entries didn’t have training examples in the smaller data set. We confirmed that this was the issue by comparing the weights in the embedding layers before and after training, and finding that a subset of the weights were unchanged after fitting the model. Because we randomly initialized the weights in our Keras models, this led to issues with the model calibration on live data.
How was it fixed?
We first limited the size of the vocabularies used for categorical features to reduce the likelihood of this issue occurring. The second change we made was setting the weights to 0 for any embedding layer entries where the weights were unchanged during training. Longer term, we moved away from reusing vocabularies across different prediction tasks.
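As a rough sketch of the kind of check this involved, the snippet below compares a Keras embedding layer’s weights before and after fitting and zeroes out any rows that never changed. The model architecture, shapes, and layer name here are hypothetical stand-ins rather than our production model.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy model with an embedding layer, standing in for the install model.
vocab_size, embed_dim = 1000, 16
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim, name="app_embedding"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

embedding_layer = model.get_layer("app_embedding")
initial_weights = embedding_layer.get_weights()[0].copy()  # snapshot before fitting

# X, y stand in for the real training data; only ids < 200 ever appear in training.
X = np.random.randint(0, 200, size=(512, 10))
y = np.random.randint(0, 2, size=(512, 1))
model.fit(X, y, epochs=1, verbose=0)

trained_weights = embedding_layer.get_weights()[0]
# Rows that are identical before and after training never received an update.
untrained_rows = np.all(np.isclose(initial_weights, trained_weights), axis=1)
print(f"{untrained_rows.sum()} of {vocab_size} embedding rows were never trained")

# Zero out the untrained rows so they behave like a neutral entry instead of noise.
trained_weights[untrained_rows] = 0.0
embedding_layer.set_weights([trained_weights])
```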
What did we learn?
We discovered that this was one of the issues leading to model instability, where models with similar performance on offline metrics would have noticeably different performance when deployed to production. We ended up building additional tooling to compare model weights across training runs as part of our model validation pipeline.
Incident 2: Padding Issue with Batching for TensorFlow Serving
What was the problem?
We migrated from Vertex AI for model serving to an in-house deployment of TensorFlow Serving, to deal with some of the tail latency issues that we were encountering with Vertex at the time. When making this change, we ran into an issue with how to handle sparse tensors when enabling batching for TensorFlow Serving. Our models contained sparse tensors for features, such as the list of known apps installed on a device, that could be empty. When we enabled batching while serving on Vertex AI, we were able to use empty arrays without issue, but for our in-house model serving we got error responses when using batching and passing empty arrays. We ended up passing “[0]” values instead of “[ ]” tensor values to avoid this issue, but this again resulted in poorly calibrated models. The core issue is that “0” referred to a specific app rather than being used for out-of-vocab (OOV). We were introducing a feature parity issue into our models, because we only made this change for model serving and not for model training.
How was it discovered?
Once we identified the change that had been made, it was easy to demonstrate that this padding approach was problematic. We took a record with an empty tensor and changed the value from “[]” to “[0]” while holding all of the other tensor values constant, and showed that this change resulted in different prediction values. This made sense, because we were altering the tensor data to say that an app was installed on the device when that was not actually the case.
How was it fixed?
Our initial fix was to change the model training pipeline to perform the same logic that we implemented for model serving, where we replace empty arrays with “[0]”, but this didn’t completely address the issue. We later changed the vocab range from [0, n-1] to [0, n], where 0 had no meaning and was added to every tensor. This meant that every sparse tensor had at least one value, and we were able to use batching with our sparse tensor setup.
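A minimal sketch of this convention is shown below, assuming a hypothetical vocabulary; the key point is that the same padding logic runs in both the training and serving pipelines, so no request ever contains an empty list.

```python
# Reserve index 0 as a meaningless padding id that is appended to every sparse
# feature, so the model never sees an empty list when batching is enabled.
PAD_ID = 0  # carries no meaning; real vocabulary ids are 1..n

# Hypothetical vocabulary: app bundle -> id, starting at 1.
vocab = {"com.example.game": 1, "com.example.shopping": 2}

def encode_installed_apps(bundles):
    """Encode a device's installed apps, always including the padding id.

    Applied identically in the training and serving pipelines so that a
    device with no known apps still produces a non-empty sparse tensor.
    """
    ids = [vocab[b] for b in bundles if b in vocab]
    return [PAD_ID] + ids  # guarantees at least one value per request

print(encode_installed_apps([]))                    # [0]
print(encode_installed_apps(["com.example.game"]))  # [0, 1]
```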
What did we learn?
This issue mostly came up due to different threads of work on the model training and model serving pipelines, and a lack of coordination. Once we identified the differences between the training and serving pipelines, it was obvious that this discrepancy could cause issues. We worked to improve on this incident by including data scientists as reviewers on pull requests for the production pipeline to help identify these types of issues.
Incident 3: Untrained Model Deployment
What was the problem?
Early on in our migration to deep learning models, we didn’t have many guardrails in place for model deployments. For each model variant we were testing, we would retrain and automatically redeploy the model daily, to make sure that the models were trained on recent data. During one of these runs, model training resulted in a model that always predicted a 25% click rate regardless of the input data, and the ROC AUC metric on the validation data set was 0.5. We had essentially deployed a model to production that always predicted a 25% click rate regardless of any of the feature inputs.
How was it discovered?
We first identified the issue using our system monitoring metrics in Datadog. We logged our click predictions (p_ctr) as a histogram metric, and Datadog provides p50 and p99 aggregations. When the model was deployed, we saw the p50 and p99 values for the model converge to the same value of ~25%, indicating that something had gone wrong with the click prediction model. We also reviewed the model training logs and saw that the metrics from the validation data set indicated a training error.
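As an illustration of this kind of monitoring, here is a minimal sketch using the DogStatsD client from the `datadog` Python package; the metric and tag names are hypothetical, and it assumes a local Datadog agent is listening for StatsD metrics.

```python
# Minimal sketch, assuming the `datadog` package and a local DogStatsD agent.
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)

def log_click_prediction(p_ctr: float, model_version: str) -> None:
    # Emitting predictions as a histogram gives p50/p95/p99 aggregations in
    # Datadog, which is how the collapsed-prediction incident surfaced
    # (p50 and p99 converging to the same ~25% value).
    statsd.histogram(
        "ctr_model.p_ctr",            # hypothetical metric name
        p_ctr,
        tags=[f"model_version:{model_version}"],
    )

log_click_prediction(0.013, "dl_v42")
```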
How was it fixed?
In this case, we were able to roll back to the click model from the previous day to resolve the issue, but it did take some time for the incident to be discovered, and our rollback approach at the time was somewhat manual.
What did we learn?
We found that this issue with bad model training occurred around 2% of the time, and that we needed to set up guardrails against deploying these models. We added a model validation module to our training pipeline that checked for thresholds on the validation metrics and also compared the outputs of the new and prior runs of the model on the same data set. We also set up alerts in Datadog to flag large changes in the p50 p_ctr metric and worked on automating our model rollback process.
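The sketch below shows the general shape of such a validation gate, assuming we have predictions from the new and prior models on a shared validation set; the thresholds are illustrative rather than the values we actually used.

```python
# Minimal sketch of a pre-deployment validation gate. Thresholds are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

MIN_AUC = 0.65          # reject models that look untrained (AUC ~ 0.5)
MIN_PRED_STD = 1e-3     # reject models that output a near-constant p_ctr
MAX_MEAN_SHIFT = 0.02   # reject large shifts vs. the prior model's mean p_ctr

def validate_candidate(new_preds, prior_preds, labels):
    checks = {
        "auc": roc_auc_score(labels, new_preds) >= MIN_AUC,
        "prediction_spread": np.std(new_preds) >= MIN_PRED_STD,
        "mean_shift": abs(np.mean(new_preds) - np.mean(prior_preds)) <= MAX_MEAN_SHIFT,
    }
    failed = [name for name, ok in checks.items() if not ok]
    return len(failed) == 0, failed

# Stand-in data for the example; in practice these come from the validation set.
ok, failed = validate_candidate(
    new_preds=np.random.uniform(0, 0.05, 1000),
    prior_preds=np.random.uniform(0, 0.05, 1000),
    labels=np.random.randint(0, 2, 1000),
)
print("deploy" if ok else f"block deployment: {failed}")
```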
Incident 4: Bad Warmup Data for TensorFlow Serving
What was the problem?
We used warmup files for TensorFlow Serving to improve the rollout time of new model deployments and to help with serving latency. We ran into an issue where the tensors defined in the warmup file didn’t correspond to the tensors defined in the TensorFlow model, resulting in failed model deployments.
How was it discovered?
In an early version of our in-house serving, this mismatch between warmup files and model tensor definitions would cause all model serving to come to a halt and require a model rollback to stabilize the system. This is another incident that was initially captured by our operational metrics in Datadog, since we saw a large spike in model serving request errors. We confirmed that there was an issue with the newly deployed model by deploying it to Vertex AI and confirming that the warmup files were the root cause of the problem.
How was it fixed?
We updated our model deployment module to confirm that the model tensors and warmup files were compatible by launching a local instance of TensorFlow Serving in the model training pipeline and sending sample requests using the warmup file data. We also did additional manual testing with Vertex AI when launching new types of models with noticeably different tensor shapes.
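A minimal sketch of this kind of check is shown below, assuming a local TensorFlow Serving instance is already running on the default REST port; the model name and feature payload are hypothetical stand-ins.

```python
# Minimal sketch, assuming TensorFlow Serving is running locally on port 8501
# with a model named "ctr_model" (the name and features are hypothetical).
import requests

SERVING_URL = "http://localhost:8501/v1/models/ctr_model:predict"

# Sample instances built from the same data used to generate the warmup file.
sample_instances = [
    {"installed_apps": [0, 12, 47], "country": [3], "device_type": [1]},
    {"installed_apps": [0], "country": [7], "device_type": [2]},
]

def check_warmup_requests(instances) -> bool:
    resp = requests.post(SERVING_URL, json={"instances": instances}, timeout=5)
    if resp.status_code != 200:
        # A tensor name or shape mismatch surfaces here instead of during rollout.
        print(f"warmup validation failed: {resp.status_code} {resp.text[:200]}")
        return False
    print(f"warmup validation ok: {len(resp.json()['predictions'])} predictions")
    return True

if not check_warmup_requests(sample_instances):
    raise SystemExit("blocking deployment: warmup data incompatible with model")
```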
What did we learn?
We learned that we needed separate environments for testing TensorFlow model deployments before pushing them to production. We were able to do some testing with Vertex AI, but eventually set up a staging environment for our in-house version of TensorFlow Serving to provide a proper CI/CD environment for model deployment.
Incident 5: Problematic Time-Based Features
What was the problem?
We explored some time-based features in our models, such as weeks_ago, to capture changes in behavior over time. For the training pipeline, this feature was calculated as floor(date_diff(today, day_of_impression)/7). It was a highly ranked feature in some of our models, but it also added unintended bias. During model serving, this value is always set to 0, since we are making model predictions in real time and today is the same as day_of_impression. The key issue is that the model training pipeline was finding patterns in the training data that could create bias issues when applying the model to live data.
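The skew is easy to see in a small sketch like the one below, which contrasts the training-time calculation with the constant value seen at serving time; the dates are hypothetical.

```python
# Minimal sketch of the training/serving skew for the weeks_ago feature.
from datetime import date
from math import floor

def weeks_ago_training(today: date, day_of_impression: date) -> int:
    # Training pipeline: floor(date_diff(today, day_of_impression) / 7)
    return floor((today - day_of_impression).days / 7)

def weeks_ago_serving() -> int:
    # Serving pipeline: predictions happen in real time, so the impression
    # date is always "today" and the feature is always 0.
    return 0

print(weeks_ago_training(date(2023, 9, 1), date(2023, 6, 2)))  # 13
print(weeks_ago_serving())                                     # 0
```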
How was it discovered?
This was another incident that we found mostly through intuition and later confirmed to be a problem by comparing the implementation logic across the training and model serving pipelines. We found that the model serving pipeline always set the value to 0, while the training pipeline used a range of values, given that we often use months-old examples for training.
How was it fixed?
We created a variant of the model with all of the relative time-based features removed and ran an A/B test to compare the performance of the variants. The model that included the time-based features performed better on the holdout metrics during offline testing, but the model with the features removed worked better in the A/B test, and we ended up removing the features from all of the models.
What did we learn?
We learned that we had introduced bias into our models in an unintended way. The features were compelling to explore, because user behavior does change over time, and introducing these features did result in better offline metrics for our models. Ultimately we decided to categorize these as problematic under the feature parity category, where we see differences in values between the model training and serving pipelines.
Incident 6: Feedback Features
What was the problem?
We had a feature called clearing_price that logged how high we were willing to bid on an impression for a device the last time that we served an ad impression for that device. This was a useful feature, because it helped us bid on devices with a high bid floor, where the model needs high confidence that a conversion event will occur. This feature by itself generally wasn’t problematic, but it did become a problem during an incident where we introduced bad labels into our training data set. We ran an experiment that resulted in false positives in our training data, and we began to see a feedback issue where the model bias became a problem.
How was it discovered?
This was a very difficult incident to identify the root cause of, because the experiment that generated the false positive labels was run on a small cohort of traffic, so we didn’t see a sudden change in operational metrics in Datadog like we did with some of the other incidents. Once we identified which devices and impressions were impacted by this test, we looked at the feature drift of our data set and found that the average value of the clearing_price feature had been rising steadily since the rollout of the experiment. The false positives in the label data were the root cause of the incident, and the drift in this feature was a secondary issue that was causing the models to make bad predictions.
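A simplified sketch of this kind of drift check is shown below, assuming a pandas DataFrame of impressions with hypothetical column names, rollout date, and drift threshold.

```python
# Minimal sketch of a daily feature-drift check; the rollout date, column
# names, and the 20% drift threshold are hypothetical.
import pandas as pd

def check_clearing_price_drift(df: pd.DataFrame, rollout_date: str,
                               max_relative_drift: float = 0.20) -> bool:
    daily_mean = df.groupby("date")["clearing_price"].mean()
    baseline = daily_mean[daily_mean.index < rollout_date].mean()
    post = daily_mean[daily_mean.index >= rollout_date].mean()
    drift = (post - baseline) / baseline
    print(f"baseline={baseline:.3f} post_rollout={post:.3f} drift={drift:+.1%}")
    return abs(drift) <= max_relative_drift

# Stand-in data: the post-rollout day shows a clearly higher average value.
df = pd.DataFrame({
    "date": ["2023-05-01"] * 3 + ["2023-05-02"] * 3 + ["2023-05-08"] * 3,
    "clearing_price": [1.0, 1.2, 1.1, 1.1, 1.0, 1.2, 1.6, 1.8, 1.7],
})
print("ok" if check_clearing_price_drift(df, "2023-05-08") else "drift detected")
```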
How was it fixed?
The first step was to roll back to the best known model from before the problematic experiment was launched. We then cleaned up the data set and removed the false positives that we could identify from the training data. We continued to see issues and also made the call to remove the problematic feature from our models, similar to the time-based features, to prevent it from creating feedback loops in the future.
What did we learn?
We learned that some features are helpful for making the model more confident in predicting user conversions, but are not worth the risk, because they can introduce a tailspin effect where the models quickly deteriorate in performance and create incidents. To replace the clearing price feature, we introduced new features using the minimum bid to win values from auction callbacks.
Incident 7: Bad Feature Encoding
What was the problem?
We explored several features that were numeric and computed as ratios, such as the average click rate of a device, computed as the number of clicks over the number of impressions served to the device. We ran into a feature parity issue where we handled divide by zero in different ways between the training and serving model pipelines.
How was it discovered?
We have a feature parity check where we log the tensors created during model inference for a subset of impressions, run the training pipeline on those impressions, and compare the values generated in the training pipeline with the values logged at serving time. We noticed a large discrepancy for the ratio-based features and found that we encoded divide by zero as -1 in the training pipeline and 0 in the serving pipeline.
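A minimal sketch of this kind of parity comparison is shown below; the impression ids, feature names, and tolerance are hypothetical, and the example mismatch mirrors the divide-by-zero discrepancy from this incident.

```python
# Minimal sketch of a feature parity check between values logged at serving
# time and values recomputed by the training pipeline for the same impressions.
import math

def compare_feature_parity(serving_logs: dict, training_values: dict,
                           tol: float = 1e-6):
    mismatches = []
    for impression_id, served in serving_logs.items():
        trained = training_values.get(impression_id, {})
        for feature, served_value in served.items():
            train_value = trained.get(feature)
            if train_value is None or not math.isclose(
                    served_value, train_value, abs_tol=tol):
                mismatches.append((impression_id, feature, served_value, train_value))
    return mismatches

# Hypothetical example: divide by zero encoded as 0 at serving and -1 in training.
serving_logs = {"imp_1": {"device_click_rate": 0.0, "device_install_rate": 0.25}}
training_values = {"imp_1": {"device_click_rate": -1.0, "device_install_rate": 0.25}}
for imp, feat, served, trained in compare_feature_parity(serving_logs, training_values):
    print(f"{imp}: {feat} served={served} trained={trained}")
```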
How was it fixed?
We updated the serving pipeline to match the logic in the training pipeline, where we set the value to -1 when a divide by zero occurs for the ratio-based features.
What did we learn?
Our pipeline for detecting feature parity issues allowed us to quickly identify the root cause of this issue once the model was deployed to production, but it’s also a situation we want to avoid before a model is deployed. We applied the same learning from incident 2, where we included data scientists on pull request reviews to help identify potential issues between our training and serving model pipelines.
Incident 8: String Parsing
What was the problem?
We used a one-hot encoding approach where we select the top k values, which are assigned indices from 1 to k, and use 0 as an out-of-vocab (OOV) value. We ran into a problem with the encoding from strings to integers when dealing with categorical features such as app bundle, which often has extra characters. For example, the vocabulary may map the bundle com.dreamgames.royalmatch to index 3, but in the training pipeline the bundle is set to com.dreamgames.royalmatch$hl=en_US and the value gets encoded to 0, because it is considered OOV. The core issue we ran into was different logic for sanitizing string values between the training and serving pipelines before applying vocabularies.
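The sketch below illustrates the mismatch and a shared sanitizer applied before the vocabulary lookup; the vocabulary contents and the rule of truncating at “$” are illustrative assumptions rather than our exact production logic.

```python
# Minimal sketch of the encoding mismatch and a shared sanitizer.
# The vocabulary and the "$" truncation rule are illustrative assumptions.
vocab = {"com.example.app": 1, "com.example.game": 2,
         "com.dreamgames.royalmatch": 3}
OOV = 0

def sanitize_bundle(bundle: str) -> str:
    # Shared between training and serving: drop suffixes such as "$hl=en_US".
    return bundle.split("$")[0].strip().lower()

def encode_bundle(bundle: str, sanitize: bool = True) -> int:
    key = sanitize_bundle(bundle) if sanitize else bundle
    return vocab.get(key, OOV)

raw = "com.dreamgames.royalmatch$hl=en_US"
print(encode_bundle(raw, sanitize=False))  # 0 -> silently treated as OOV
print(encode_bundle(raw, sanitize=True))   # 3 -> matches the intended entry
```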
How was it discovered?
This was another incident that we discovered with our feature parity checker. We found several examples where one pipeline encoded the values as OOV while the other pipeline assigned non-zero values. We then compared the feature values prior to encoding and noticed discrepancies between how we did string parsing in the training and serving pipelines.
How was it fixed?
Our short-term fix was to update the training pipeline to perform the same string parsing logic as the serving pipeline. Longer term, we focused on truncating the app bundle names at the data ingestion step, to reduce the need for manual parsing steps in the different pipelines.
What did we learn?
We learned that dealing with problematic strings at data ingestion provided the most consistent results when working with string values. We also ran into issues with unicode characters showing up in app bundle names and worked to correctly parse these during ingestion. We also found it necessary to occasionally inspect the vocabulary entries generated by the system to make sure special characters were not showing up in entries.
Takeaways
While it may be tempting to use deep learning in production for model serving, there are plenty of potential issues that you can encounter with live model serving. It’s important to have robust plans in place for incident management when working with machine learning models, so that you can quickly recover when model performance becomes problematic and learn from these missteps. In this post we covered 8 different incidents I encountered when using deep learning to predict click and install conversion in a mobile AdTech platform. Here are the key takeaways I learned from these machine learning incidents:
- It’s important to log feature values, encoded values, tensor values, and model predictions during model serving, to make sure that you don’t have feature parity or model parity issues in your model pipelines.
- Model validation is a necessary step in model deployment, and test environments can help reduce incidents.
- Be careful about the features that you include in your model; they may be introducing bias or causing unintended feedback loops.
- If you have different pipelines for model training and model serving, the team members working on the pipelines should be reviewing each other’s pull requests for ML feature implementations.
Machine learning is a discipline that can learn a lot from DevOps to reduce the incidence of incidents, and MLOps should include processes for efficiently responding to issues with ML models in production.