
When working with large datasets, we often need to filter rows by different conditions, aggregate or compute over those subsets, and then combine the results. The traditional approach typically involves repeated boolean filtering (df[condition]), creating temporary DataFrames, and stitching the pieces back together with merge operations. While logically sound, this approach produces long, repetitive, hard-to-maintain code once the aggregation requirements become complex or varied.
Consider the following scenario: from a DataFrame df_stats containing enzyme statistics, we need to extract rows for specific N values (e.g. N=50, N=90) and region types ('all', 'captured'), and compute length differences between combinations, for example the difference between the captured N50 and all N50 lengths, and between the captured N90 and all N50 lengths. With the filter-and-merge approach, the code looks like this:
import io
import pandas as pd
TESTDATA="""
enzyme regions N length
AaaI all 10 238045
AaaI all 20 170393
AaaI all 30 131782
AaaI all 40 103790
AaaI all 50 81246
AaaI all 60 62469
AaaI all 70 46080
AaaI all 80 31340
AaaI all 90 17188
AaaI captured 10 292735
AaaI captured 20 229824
AaaI captured 30 193605
AaaI captured 40 163710
AaaI captured 50 138271
AaaI captured 60 116122
AaaI captured 70 95615
AaaI captured 80 73317
AaaI captured 90 50316
AagI all 10 88337
AagI all 20 19144
AagI all 30 11030
AagI all 40 8093
AagI all 50 6394
AagI all 60 4991
AagI all 70 3813
AagI all 80 2759
AagI all 90 1666
AagI captured 10 34463
AagI captured 20 19220
AagI captured 30 15389
AagI captured 40 12818
AagI captured 50 10923
AagI captured 60 9261
AagI captured 70 7753
AagI captured 80 6201
AagI captured 90 4495
"""
df_stats = pd.read_csv(io.StringIO(TESTDATA), sep=r'\s+')
# The verbose traditional approach: one filter per combination
df_cap_N90 = df_stats[(df_stats['N'] == 90) & (df_stats['regions'] == 'captured')].drop(columns=['regions', 'N'])
df_cap_N50 = df_stats[(df_stats['N'] == 50) & (df_stats['regions'] == 'captured')].drop(columns=['regions', 'N'])
df_all_N50 = df_stats[(df_stats['N'] == 50) & (df_stats['regions'] == 'all') ].drop(columns=['regions', 'N'])
df_summ_cap_N50_all_N50 = pd.merge(df_cap_N50, df_all_N50, on='enzyme', how='inner', suffixes=('_cap_N50', '_all_N50'))
df_summ_cap_N50_all_N50['cap_N50_all_N50'] = (df_summ_cap_N50_all_N50['length_cap_N50'] -
df_summ_cap_N50_all_N50['length_all_N50'])
df_summ_cap_N90_all_N50 = pd.merge(df_cap_N90, df_all_N50, on='enzyme', how='inner', suffixes=('_cap_N90', '_all_N50'))
df_summ_cap_N90_all_N50['cap_N90_all_N50'] = df_summ_cap_N90_all_N50['length_cap_N90'] - df_summ_cap_N90_all_N50['length_all_N50']
df_summ = pd.merge(df_summ_cap_N50_all_N50.drop(columns=['length_cap_N50', 'length_all_N50']),
df_summ_cap_N90_all_N50.drop(columns=['length_cap_N90', 'length_all_N50']),
on='enzyme', how='inner')
print("Traditional approach result:")
print(df_summ)
The code above produces the expected result, but it creates several intermediate DataFrames and performs multiple merge operations. This hurts readability and maintainability, and can also cost performance on large datasets.
Pandas' pivot function reshapes a DataFrame from "long" to "wide" format, which is exactly what we need when aggregating and comparing across multiple categorical variables. Combined with vectorized operations, it dramatically simplifies the process above.
The core idea is:
1. Pivot with enzyme as the index and the (regions, N) pair as the columns, so every (region, N) combination becomes a column of its own.
2. Express each required difference as a single vectorized subtraction between columns, with no intermediate DataFrames and no merges.
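The two steps above can be sketched as follows. For the sketch to run standalone, a small subset of df_stats is rebuilt inline (only the N=50 and N=90 rows actually used); in context you would reuse the df_stats loaded earlier. Passing a list to the columns parameter of pivot requires pandas 1.1 or later.

```python
import pandas as pd

# Minimal subset of df_stats so this sketch is self-contained;
# reuse the full df_stats from above in practice.
df_stats = pd.DataFrame({
    'enzyme':  ['AaaI'] * 3 + ['AagI'] * 3,
    'regions': ['all', 'captured', 'captured'] * 2,
    'N':       [50, 50, 90] * 2,
    'length':  [81246, 138271, 50316, 6394, 10923, 4495],
})

# Step 1: long -> wide. Columns become a (regions, N) MultiIndex,
# one row per enzyme.
df_wide = df_stats.pivot(index='enzyme', columns=['regions', 'N'],
                         values='length')

# Step 2: each difference is one vectorized column subtraction --
# no intermediate DataFrames, no merges.
df_summ = pd.DataFrame({
    'cap_N50_all_N50': df_wide[('captured', 50)] - df_wide[('all', 50)],
    'cap_N90_all_N50': df_wide[('captured', 90)] - df_wide[('all', 50)],
}).reset_index()

print(df_summ)
```

The result matches the traditional approach (e.g. for AaaI, cap_N50_all_N50 is 138271 - 81246 = 57025), and adding another comparison is now a one-line change rather than another filter-and-merge round trip.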