
Pandas Isn’t Going Anywhere: Why It’s Still My Go-To for Data Wrangling


When I started learning data science in 2020, Pandas was one of the most popular tools. Although newer tools focus on improving Pandas’ weaknesses in handling very large datasets, I still use Pandas for many data cleaning, processing, and analysis tasks. Yes, Pandas gives me a hard time when working with billions of rows, but it’s definitely more than enough for anything below that.

I see Pandas being used not only for EDA or in notebooks but also in production systems.

In this article, I’ll go over some data cleaning and processing operations to demonstrate how capable Pandas is.

Let’s start with the dataset, which contains stock keeping units (SKUs) and the search API responses for those SKUs.

import pandas as pd

search_results = pd.read_csv("search_results.csv")

search_results.head()

The search result is a list of dictionaries and looks like this:

search_results.loc[0, "search_result"]

"[{'my_id': 'HBCV00007F5Y2B', 'distance': 1.0, 'entity': {}}, 
{'my_id': 'HBCV00007UPQBM', 'distance': 1.0, 'entity': {}}, 
{'my_id': 'HBCV00008I29IH', 'distance': 1.0, 'entity': {}}, 
{'my_id': 'HBCV00006U3ZYB', 'distance': 0.8961254358291626, 'entity': {}}, 
{'my_id': 'HBCV0000AFA4H6', 'distance': 0.8702399730682373, 'entity': {}}, 
{'my_id': 'HBCV00009CDGD4', 'distance': 0.86175537109375, 'entity': {}}, 
{'my_id': 'HBCV000046336T', 'distance': 0.8594968318939209, 'entity': {}}, 
{'my_id': 'HBCV00009QDZRT', 'distance': 0.8572311997413635, 'entity': {}}, 
{'my_id': 'HBCV00008E11P3', 'distance': 0.8553324937820435, 'entity': {}}, 
{'my_id': 'HBV00000C4IY6', 'distance': 0.8539167642593384, 'entity': {}}] 
... and 5 entities remaining"

As we see in the output, it’s not in a proper list-of-dictionaries format because of the last part (“… and 5 entities remaining”). Also, it’s stored as a single string.
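A quick way to confirm this (a minimal check of my own, not part of the original output) is to look at the cell’s type:

type(search_results.loc[0, "search_result"])
# <class 'str'> — the whole search result is one long string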

In order to make better use of it, we need to convert it to a proper list of dictionaries. The following line of code removes the last part by splitting the string at “…” and taking the first split.

search_results.loc[0, "search_result"].split("...")[0].strip()

However, the output is still a single string. We can use Python’s built-in ast module to convert it to a list:

import ast

res = ast.literal_eval(search_results.loc[0, "search_result"].split("...")[0].strip())

res

[{'my_id': 'HBCV00007F5Y2B', 'distance': 1.0, 'entity': {}},
 {'my_id': 'HBCV00007UPQBM', 'distance': 1.0, 'entity': {}},
 {'my_id': 'HBCV00008I29IH', 'distance': 1.0, 'entity': {}},
 {'my_id': 'HBCV00006U3ZYB', 'distance': 0.8961254358291626, 'entity': {}},
 {'my_id': 'HBCV0000AFA4H6', 'distance': 0.8702399730682373, 'entity': {}},
 {'my_id': 'HBCV00009CDGD4', 'distance': 0.86175537109375, 'entity': {}},
 {'my_id': 'HBCV000046336T', 'distance': 0.8594968318939209, 'entity': {}},
 {'my_id': 'HBCV00009QDZRT', 'distance': 0.8572311997413635, 'entity': {}},
 {'my_id': 'HBCV00008E11P3', 'distance': 0.8553324937820435, 'entity': {}},
 {'my_id': 'HBV00000C4IY6', 'distance': 0.8539167642593384, 'entity': {}}]

We now have the search results as a proper list of dictionaries. This was only for a single row. We need to apply the same operation to all SKUs (i.e. the entire SKU column).

One option is to go over all the rows in a for loop and perform the same operation. However, this isn’t the best option. We should prefer vectorized operations when we can. A vectorized operation basically means executing the code on all rows at once.
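For reference, the loop-based approach would look roughly like this (a sketch of the slower alternative, not code from the original article):

# Row-by-row cleaning in plain Python — works, but doesn't benefit from vectorization
cleaned = [value.split("...")[0].strip() for value in search_results["search_result"]]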

On a single row, I used splitting to get rid of the last part of the string, but it didn’t work as a vectorized operation. A more robust option seems to be using a regex.

search_results.loc[:, 'search_result'] = search_results['search_result'].str.replace(r"\.\.\..*", "", regex=True).str.strip()

This code matches “…” and everything that comes after it and replaces them with nothing. In other words, it removes the “… and 5 entities remaining” part.
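To see what the pattern does in isolation, here is a tiny check on a made-up string (the sample text is mine, not from the dataset):

sample = pd.Series(["[{'my_id': 'A'}] ... and 5 entities remaining"])
sample.str.replace(r"\.\.\..*", "", regex=True).str.strip()[0]
# "[{'my_id': 'A'}]"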

We now have all the rows in the search result column in a proper list-of-dictionaries format.

search_results.loc[10, "search_result"]

"[{'my_id': 'HBCV00007F5Y2B', 'distance': 1.0, 'entity': {}},
 {'my_id': 'HBCV00007UPQBM', 'distance': 1.0, 'entity': {}},
 {'my_id': 'HBCV00008I29IH', 'distance': 1.0, 'entity': {}},
 {'my_id': 'HBCV00006U3ZYB', 'distance': 0.8961254358291626, 'entity': {}},
 {'my_id': 'HBCV0000AFA4H6', 'distance': 0.8702399730682373, 'entity': {}},
 {'my_id': 'HBCV00009CDGD4', 'distance': 0.86175537109375, 'entity': {}},
 {'my_id': 'HBCV000046336T', 'distance': 0.8594968318939209, 'entity': {}},
 {'my_id': 'HBCV00009QDZRT', 'distance': 0.8572311997413635, 'entity': {}},
 {'my_id': 'HBCV00008E11P3', 'distance': 0.8553324937820435, 'entity': {}},
 {'my_id': 'HBV00000C4IY6', 'distance': 0.8539167642593384, 'entity': {}}]"

They are still stored as strings, but I can easily convert them to lists using the ast module, which I’ll do in the next step.

What I’m interested in is the SKUs returned in the search results. I’ll create a new column by extracting the SKUs from the dictionaries. I can access them using the “my_id” key of the dictionary.

There are 3 parts to this operation:

  • Convert the search result string to a list using the literal_eval function
  • Extract the SKU from the my_id key of the dictionary
  • Do this in a list comprehension to get the SKUs from all the dictionaries in the list

We can do all of these operations by applying a lambda function to all rows as follows:

search_results.loc[:, "result_skus"] = search_results["search_result"].apply(
    lambda x: [item['my_id'] for item in ast.literal_eval(x)]
)

search_results.head()

Each row in the result_skus column contains a list of 10 SKUs. Let’s say I need to have these 10 SKUs in separate rows. For each row in the sku column, there will be 10 rows created from the list in the result_skus column. There’s a very simple way of doing this in Pandas, which is the explode function.

data = search_results[["sku", "result_skus"]].explode("result_skus", ignore_index=True)

data.head()

We created a new dataframe with the sku and result_skus columns. The small example below demonstrates what the explode function does:
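This is a made-up toy dataframe (not the article’s dataset), just to illustrate the reshaping:

toy = pd.DataFrame({"sku": ["A", "B"], "result_skus": [["x", "y"], ["z"]]})
toy.explode("result_skus", ignore_index=True)
#   sku result_skus
# 0   A           x
# 1   A           y
# 2   B           z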

Consider the opposite. We have a dataframe as shown above but want to have all the results for an SKU in a single row.

We can use the groupby function to group the rows by sku and then apply the list function on the result_skus column:

new_data = data.groupby("sku", as_index=False)["result_skus"].apply(list)

new_data.head()

This gets us back to the previous step:

Using the explode function, we created a dataframe with a separate row for each SKU in the result_skus column. What if we need to have them separated into different columns instead of rows?

One option is to apply the pd.Series function to the result_skus column and concatenate the resulting columns to the original dataframe.

new_cols = new_data["result_skus"].apply(pd.Series)

new_data = pd.concat([new_data, new_cols], axis=1)

new_data.head()

Columns 0 to 9 contain the 10 SKUs from the result_skus column. This code using the apply function is not a vectorized operation.

We have another option, which is vectorized and much faster.

new_cols = pd.DataFrame(new_data["result_skus"].tolist())

new_data = pd.concat([new_data, new_cols], axis=1)

This code gives us the same dataframe as above, but much faster.
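If you want to check this on your own data, a rough comparison (a sketch, not a rigorous benchmark) could look like this:

import time

start = time.perf_counter()
slow = new_data["result_skus"].apply(pd.Series)
print("apply(pd.Series):", time.perf_counter() - start)

start = time.perf_counter()
fast = pd.DataFrame(new_data["result_skus"].tolist())
print("pd.DataFrame(...tolist()):", time.perf_counter() - start)

# Same values either way (indexes/dtypes may need aligning for a strict .equals check)
print((slow.to_numpy() == fast.to_numpy()).all())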

I demonstrated a typical data cleaning and processing task that a data scientist or analyst may encounter in their job. I’ve been in the field for over 5 years, and Pandas has always been enough to do what I need, except when working with very large datasets (e.g. billions of rows).

The tools that are a better fit for such large datasets have similar syntax to Pandas. For example, PySpark is kind of a mix of Pandas and SQL. Polars is very similar to Pandas in terms of syntax. Thus, learning and practicing Pandas is still a highly valuable skill for anyone working in the data science and AI space.
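As a quick illustration of that similarity (my own sketch, assuming Polars is installed; it isn’t part of the original article), the groupby-to-list step from earlier could be written in Polars like this:

import polars as pl

# Collect all result SKUs per sku, as we did with groupby + list in Pandas
pl_data = pl.from_pandas(data)
pl_new = pl_data.group_by("sku").agg(pl.col("result_skus"))  # .groupby() in older Polars versions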

Thanks for reading.
