Chapter 4: Reshaping
import numpy as np
import pandas as pd
df = pd.read_csv('data/table.csv')
df.head()
School Class ID Gender Address Height Weight Math Physics
0 S_1 C_1 1101 M street_1 173 63 34.0 A+
1 S_1 C_1 1102 F street_2 192 73 32.5 B+
2 S_1 C_1 1103 M street_2 186 82 87.2 B+
3 S_1 C_1 1104 F street_2 167 81 80.4 B-
4 S_1 C_1 1105 F street_4 159 64 84.8 B+
One. Pivot Tables

1. pivot

Normally, data in a DataFrame is stored in a stacked (long) state. In the example above, for instance, the two Gender categories are stacked in a single column. The pivot function spreads the values of one column into new columns:
df.pivot(index='ID',columns='Gender',values='Height').head()
Gender F M
ID
1101 NaN 173.0
1102 192.0 NaN
1103 NaN 186.0
1104 167.0 NaN
1105 159.0 NaN
However, pivot has strong limitations. Besides offering fewer features, it cannot handle duplicate row/column index pairs in values. For example, the following statement raises an error:
#df.pivot(index='School',columns='Gender',values='Height').head()
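To make the limitation concrete, here is a minimal sketch on a hypothetical toy frame (not the chapter's data/table.csv) in which two rows share the same (School, Gender) pair; pivot raises an error, while pivot_table aggregates the duplicates instead:

```python
import pandas as pd

# Hypothetical toy frame: two rows share the (School='S_1', Gender='M') pair
toy = pd.DataFrame({'School': ['S_1', 'S_1', 'S_2'],
                    'Gender': ['M', 'M', 'F'],
                    'Height': [173, 186, 167]})

err = None
try:
    toy.pivot(index='School', columns='Gender', values='Height')
except ValueError as e:  # "Index contains duplicate entries, cannot reshape"
    err = e

# pivot_table resolves the duplicates by aggregating (mean by default)
pt = toy.pivot_table(index='School', columns='Gender', values='Height')
print(pt)
```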
Therefore, the more powerful pivot_table function is usually chosen instead.

2. pivot_table

First, reproduce the operation above:
pd.pivot_table(df,index='ID',columns='Gender',values='Height').head()
Gender F M
ID
1101 NaN 173.0
1102 192.0 NaN
1103 NaN 186.0
1104 167.0 NaN
1105 159.0 NaN
Because it offers more functionality, it is naturally slower than the original pivot function:
%timeit df.pivot(index='ID',columns='Gender',values='Height')
%timeit pd.pivot_table(df,index='ID',columns='Gender',values='Height')
2.28 ms ± 74.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
9.77 ms ± 498 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Pandas provides various options for pivot_table; the common parameters are described below:

① aggfunc: aggregation applied within each group. Various functions can be passed in; the default is 'mean'.
pd.pivot_table(df,index='School',columns='Gender',values='Height',aggfunc=['mean','sum']).head()
mean sum
Gender F M F M
School
S_1 173.125000 178.714286 1385 1251
S_2 173.727273 172.000000 1911 1548
② margins: adds marginal (subtotal) aggregation.
pd.pivot_table(df,index='School',columns='Gender',values='Height',aggfunc=['mean','sum'],margins=True).head()
# margins_name sets the margin label; the default is 'All'
mean sum
Gender F M All F M All
School
S_1 173.125000 178.714286 175.733333 1385 1251 2636
S_2 173.727273 172.000000 172.950000 1911 1548 3459
All 173.473684 174.937500 174.142857 3296 2799 6095
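Beyond a list, aggfunc also accepts a dict mapping each value column to its own aggregation, including arbitrary callables. A minimal sketch on a hypothetical mini-frame standing in for the chapter's data:

```python
import pandas as pd

# Hypothetical mini-frame standing in for the chapter's data
mini = pd.DataFrame({'School': ['S_1', 'S_1', 'S_2', 'S_2'],
                     'Height': [173, 192, 161, 175],
                     'Weight': [63, 73, 68, 57]})

# Each value column gets its own aggregation; a lambda computes the range
out = pd.pivot_table(mini, index='School',
                     values=['Height', 'Weight'],
                     aggfunc={'Height': 'max',
                              'Weight': lambda s: s.max() - s.min()})
print(out)
```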
③ Rows, columns, and values can all be multi-level:
pd.pivot_table(df,index=['School','Class'],columns=['Gender','Address'],values=['Height','Weight'])
(The result is a wide frame with a three-level column index — value (Height/Weight) × Gender × Address — and a two-level row index (School, Class): 7 rows × 24 columns. The heavily truncated output is omitted here.)

3. crosstab (cross-tabulation)

A crosstab is a special kind of pivot table. A typical use is frequency counting by group; for example, to count frequencies grouped by street and gender:
pd.crosstab(index=df['Address'],columns=df['Gender'])
Gender F M
Address
street_1 1 2
street_2 4 2
street_4 3 5
street_5 3 3
street_6 5 1
street_7 3 3
Crosstab is also powerful (although it does not currently support multi-level grouping). Here are some important parameters:

① values and aggfunc: aggregate data within each group; these two parameters must appear together:
pd.crosstab(index=df['Address'],columns=df['Gender'],
            values=np.random.randint(1,20,df.shape[0]),aggfunc='min')
# With no values/aggfunc, the default behaviour is equivalent to:
#pd.crosstab(index=df['Address'],columns=df['Gender'],values=1,aggfunc='count')
Gender F M
Address
street_1 6 4
street_2 10 5
street_4 6 2
street_5 10 8
street_6 9 4
street_7 8 4
② Besides the margins parameter seen above, crosstab also introduces a normalize parameter, which accepts the values 'all', 'index', and 'columns':
pd.crosstab(index=df['Address'],columns=df['Gender'],normalize='all',margins=True)
Gender F M All
Address
street_1 0.028571 0.057143 0.085714
street_2 0.114286 0.057143 0.171429
street_4 0.085714 0.142857 0.228571
street_5 0.085714 0.085714 0.171429
street_6 0.142857 0.028571 0.171429
street_7 0.085714 0.085714 0.171429
All 0.542857 0.457143 1.000000
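The difference between the normalize options is easiest to see on a small hypothetical frame: 'index' makes each row sum to 1, while 'columns' makes each column sum to 1:

```python
import pandas as pd

# Hypothetical example data
t = pd.DataFrame({'Address': ['street_1', 'street_1', 'street_2', 'street_2'],
                  'Gender':  ['F', 'M', 'M', 'M']})

by_row = pd.crosstab(t['Address'], t['Gender'], normalize='index')
by_col = pd.crosstab(t['Address'], t['Gender'], normalize='columns')
print(by_row)  # each row sums to 1
print(by_col)  # each column sums to 1
```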
Two. Other Reshaping Methods

1. melt

The melt function can be regarded as the inverse of pivot: it compresses data from the unstacked state back into the stacked state, turning a "wide" DataFrame into a "narrow" one.
df_m = df[['ID','Gender','Math']]
df_m.head()
ID Gender Math
0 1101 M 34.0
1 1102 F 32.5
2 1103 M 87.2
3 1104 F 80.4
4 1105 F 84.8
df.pivot(index='ID',columns='Gender',values='Math').head()
Gender F M
ID
1101 NaN 34.0
1102 32.5 NaN
1103 NaN 87.2
1104 80.4 NaN
1105 84.8 NaN
In melt, id_vars indicates the columns to keep, and value_vars indicates the set of columns to stack:
pivoted = df.pivot(index='ID',columns='Gender',values='Math')
# Check whether the result is identical to df_m before pivoting; each
# intermediate step of this method chain can be run separately to see what it does
result = pivoted.reset_index().melt(id_vars=['ID'],value_vars=['F','M'],value_name='Math')\
                .dropna().set_index('ID').sort_index()
result.equals(df_m.set_index('ID'))
True
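melt can also be applied directly to a wide frame without going through pivot first; the var_name parameter names the new key column. A sketch on a hypothetical wide frame:

```python
import pandas as pd

# Hypothetical wide frame: one score column per subject
wide = pd.DataFrame({'ID': [1101, 1102],
                     'Math': [34.0, 32.5],
                     'Physics': [87.2, 80.4]})

tidy = wide.melt(id_vars=['ID'], value_vars=['Math', 'Physics'],
                 var_name='Subject', value_name='Score')
print(tidy)
```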
2. Stacking and unstacking

(1) stack: the most basic reshaping function, with only two parameters: level and dropna
df_s = pd.pivot_table(df,index=['Class','ID'],columns='Gender',values=['Height','Weight'])
df_s.groupby('Class').head(2)
Height Weight
Gender F M F M
Class ID
C_1 1101 NaN 173.0 NaN 63.0
1102 192.0 NaN 73.0 NaN
C_2 1201 NaN 188.0 NaN 68.0
1202 176.0 NaN 94.0 NaN
C_3 1301 NaN 161.0 NaN 68.0
1302 175.0 NaN 57.0 NaN
C_4 2401 192.0 NaN 62.0 NaN
2402 NaN 166.0 NaN 82.0
df_stacked = df_s.stack()
df_stacked.groupby('Class').head(2)
Height Weight
Class ID Gender
C_1 1101 M 173.0 63.0
1102 F 192.0 73.0
C_2 1201 M 188.0 68.0
1202 F 176.0 94.0
C_3 1301 M 161.0 68.0
1302 F 175.0 57.0
C_4 2401 F 192.0 62.0
2402 M 166.0 82.0
The stack function can be seen as moving a horizontal (column) index into the vertical direction, so it is functionally similar to melt. The level parameter specifies which level of the column index is moved (or which levels, if a list is passed):
df_stacked = df_s.stack(0)
df_stacked.groupby('Class').head(2)
Gender F M
Class ID
C_1 1101 Height NaN 173.0
Weight NaN 63.0
C_2 1201 Height NaN 188.0
Weight NaN 68.0
C_3 1301 Height NaN 161.0
Weight NaN 68.0
C_4 2401 Height 192.0 NaN
Weight 62.0 NaN
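As noted, level also accepts a list, stacking several column levels at once. A minimal sketch on a hypothetical two-level frame (the level names are assumed for readability; positions would work equally well):

```python
import pandas as pd

# Hypothetical frame with a two-level column index: measure x Gender
cols = pd.MultiIndex.from_product([['Height', 'Weight'], ['F', 'M']],
                                  names=['Measure', 'Gender'])
d = pd.DataFrame([[192.0, 173.0, 73.0, 63.0]],
                 index=pd.Index([1101], name='ID'), columns=cols)

# A list stacks both column levels at once, producing a long Series
s = d.stack(['Measure', 'Gender'])
print(s)
```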
(2) unstack: the inverse of stack, similar in function to pivot_table
df_stacked.head()
Gender F M
Class ID
C_1 1101 Height NaN 173.0
Weight NaN 63.0
1102 Height 192.0 NaN
Weight 73.0 NaN
1103 Height NaN 186.0
result = df_stacked.unstack().swaplevel(1,0,axis=1).sort_index(axis=1)
result.equals(df_s) # the level parameter can also be specified in unstack
True
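A sketch of specifying level in unstack, on a hypothetical long Series; either index level can be moved to the columns, by name or by position:

```python
import pandas as pd

# Hypothetical long Series with a two-level row index
idx = pd.MultiIndex.from_product([[1101, 1102], ['Height', 'Weight']],
                                 names=['ID', 'Measure'])
s = pd.Series([173.0, 63.0, 192.0, 73.0], index=idx)

wide_measure = s.unstack('Measure')  # move the 'Measure' level to columns
wide_id = s.unstack('ID')            # or move the 'ID' level instead
print(wide_measure)
```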
Three. Dummy Variables and Factorization

1. Dummy variables

This section mainly introduces the get_dummies function, whose main purpose is one-hot encoding:
df_d = df[['Class','Gender','Weight']]
df_d.head()
Class Gender Weight
0 C_1 M 63
1 C_1 F 73
2 C_1 M 82
3 C_1 F 81
4 C_1 F 64
Now convert the first two columns of the table above into dummy variables, and append the third column, Weight:
pd.get_dummies(df_d[['Class','Gender']]).join(df_d['Weight']).head()
# the optional prefix parameter adds a prefix; prefix_sep sets the separator
Class_C_1 Class_C_2 Class_C_3 Class_C_4 Gender_F Gender_M Weight
0 1 0 0 0 0 1 63
1 1 0 0 0 1 0 73
2 1 0 0 0 0 1 82
3 1 0 0 0 1 0 81
4 1 0 0 0 1 0 64
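A sketch of the prefix and prefix_sep parameters mentioned in the comment, on a hypothetical frame; prefix may also be a dict mapping each column to its own prefix:

```python
import pandas as pd

# Hypothetical categorical frame
d = pd.DataFrame({'Class': ['C_1', 'C_2'], 'Gender': ['M', 'F']})

# A per-column prefix dict and a custom separator
dummies = pd.get_dummies(d, prefix={'Class': 'cls', 'Gender': 'sex'},
                         prefix_sep='|')
print(list(dummies.columns))
```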
2. The factorize method

This method is mainly used for natural-number encoding, with missing values coded as -1. The sort parameter indicates whether to sort the unique values before assigning codes:
codes, uniques = pd.factorize(['b', None, 'a', 'c', 'b'], sort=True)
display(codes)
display(uniques)
array([ 1, -1,  0,  2,  1])
array(['a', 'b', 'c'], dtype=object)
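For contrast, without sort the codes follow order of first appearance rather than sorted order, while missing values are still coded -1:

```python
import pandas as pd

# Same data as above, but with the default sort=False
codes, uniques = pd.factorize(['b', None, 'a', 'c', 'b'])
print(codes)    # 'b' appears first, so it receives code 0; None stays -1
print(uniques)
```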
