
pandas

Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more.


Converting a Pandas GroupBy object to DataFrame

I'm starting with input data like this:

df1 = pandas.DataFrame({
    "Name": ["Alice", "Bob", "Mallory", "Mallory", "Bob", "Mallory"],
    "City": ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"],
})

Which, when printed, appears as this:

   City     Name
0   Seattle    Alice
1   Seattle      Bob
2  Portland  Mallory
3   Seattle  Mallory
4   Seattle      Bob
5  Portland  Mallory

Grouping is simple enough:

g1 = df1.groupby(["Name", "City"]).count()

and printing the result yields:

                  City  Name
Name    City
Alice   Seattle      1     1
Bob     Seattle      2     2
Mallory Portland     2     2
        Seattle      1     1

But what I want eventually is another DataFrame object that contains all the rows of the grouped result. In other words, I want to get the following result:

                  City  Name
Name    City
Alice   Seattle      1     1
Bob     Seattle      2     2
Mallory Portland     2     2
Mallory Seattle      1     1

I can't quite see how to accomplish this in the pandas documentation. Any hints would be welcome.
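
A sketch of what I believe gets there: size() counts the rows per group, and reset_index() turns the MultiIndex group keys back into ordinary columns (the name= argument on Series.reset_index needs a reasonably recent pandas):

import pandas as pd

df1 = pd.DataFrame({
    "Name": ["Alice", "Bob", "Mallory", "Mallory", "Bob", "Mallory"],
    "City": ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"],
})

# size() gives one row count per (Name, City) group; reset_index()
# flattens the MultiIndex back into ordinary columns.
counts = df1.groupby(["Name", "City"]).size().reset_index(name="count")
print(counts)
#       Name      City  count
# 0    Alice   Seattle      1
# 1      Bob   Seattle      2
# 2  Mallory  Portland      2
# 3  Mallory   Seattle      1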


Source: (StackOverflow)

Convert list of dictionaries to Dataframe

I have a list of dictionaries like this:

[{'points': 50, 'time': '5:00', 'year': 2010}, 
{'points': 25, 'time': '6:00', 'month': "february"}, 
{'points':90, 'time': '9:00', 'month': 'january'}, 
{'points_h1':20, 'month': 'june'}]

and I want to turn this into a pandas dataframe like this:

points, time, year, month, points_h1

50, 5:00, 2010, NONE, NONE
25, 6:00, NONE, february, NONE
90, 9:00, NONE, january, NONE
NONE, NONE, NONE, june, 20

Order of the columns does not matter. Ultimately, the goal is to write this to a text file, and this seems like the best solution I could find. How can I turn the list of dictionaries into a pandas DataFrame as shown above?
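
A minimal sketch: the DataFrame constructor accepts a list of dicts directly, taking the union of all keys as columns and filling missing entries with NaN:

import pandas as pd

data = [{'points': 50, 'time': '5:00', 'year': 2010},
        {'points': 25, 'time': '6:00', 'month': 'february'},
        {'points': 90, 'time': '9:00', 'month': 'january'},
        {'points_h1': 20, 'month': 'june'}]

df = pd.DataFrame(data)
print(df)

# For the text-file goal, to_csv writes the frame out directly
# ('out.txt' is a hypothetical filename).
df.to_csv('out.txt', index=False)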


Source: (StackOverflow)

How to iterate over rows in a DataFrame?

I have a DataFrame from pandas:

import pandas as pd
inp = [{'c1':10, 'c2':100}, {'c1':11,'c2':110}, {'c1':12,'c2':120}]
df = pd.DataFrame(inp)
print df

Output:

   c1   c2
0  10  100
1  11  110
2  12  120

Now I want to iterate over the rows of the above frame. For every row I want to be able to access its elements (values in cells) by the name of the columns. So, for example, I would like to have something like this:

for row in df.rows:
   print row['c1'], row['c2']

Is it possible to do that in pandas?

I found a similar question, but it does not give me the answer I need. For example, it is suggested there to use:

for date, row in df.T.iteritems():

or

for row in df.iterrows():

But I do not understand what the row object is and how I can work with it.
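
A minimal sketch of iterrows(): it yields (index, row) pairs, where each row is a Series indexed by column name, so cells are reachable by column label:

import pandas as pd

df = pd.DataFrame([{'c1': 10, 'c2': 100},
                   {'c1': 11, 'c2': 110},
                   {'c1': 12, 'c2': 120}])

for index, row in df.iterrows():
    # row behaves like a dict keyed by column name
    print(row['c1'], row['c2'])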


Source: (StackOverflow)

How can I replace all the NaN values with zeros in a column of a pandas dataframe

I have a dataframe as below:

      itm Date                  Amount 
67    420 2012-09-30 00:00:00   65211
68    421 2012-09-09 00:00:00   29424
69    421 2012-09-16 00:00:00   29877
70    421 2012-09-23 00:00:00   30990
71    421 2012-09-30 00:00:00   61303
72    485 2012-09-09 00:00:00   71781
73    485 2012-09-16 00:00:00     NaN
74    485 2012-09-23 00:00:00   11072
75    485 2012-09-30 00:00:00  113702
76    489 2012-09-09 00:00:00   64731
77    489 2012-09-16 00:00:00     NaN

When I try to .apply a function to the Amount column, I get the following error:

ValueError: cannot convert float NaN to integer

I have tried applying a function using math.isnan from the math module, I have tried the pandas .replace method, and I tried the .sparse data attribute from pandas 0.9. I have also tried an if NaN == NaN statement in a function, and I have looked at this article, How do I replace NA values with zeros in R?, whilst looking at some other articles. None of the methods I tried worked, or they did not recognise NaN. Any hints or solutions would be appreciated.
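
A minimal sketch using fillna(), which I believe is the intended tool here; once the NaNs are gone, the column can be cast back to integers:

import numpy as np
import pandas as pd

df = pd.DataFrame({'itm': [485, 485, 489, 489],
                   'Amount': [71781, np.nan, 64731, np.nan]})

# Replace NaN with 0 in the Amount column only; astype(int) is then
# safe because no NaN remains.
df['Amount'] = df['Amount'].fillna(0).astype(int)
print(df)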


Source: (StackOverflow)

pandas: filter rows of DataFrame with operator chaining

Most operations in pandas can be accomplished with operator chaining (groupby, aggregate, apply, etc.), but the only way I've found to filter rows is via normal bracket indexing:

df_filtered = df[df['column'] == value]

This is unappealing, as it requires that I assign df to a variable before being able to filter on its values. Is there something more like the following?

df_filtered = df.mask(lambda x: x['column'] == value)
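
One chain-friendly option is query(), assuming pandas 0.13 or newer where it exists; the @name syntax pulls in local variables:

import pandas as pd

df = pd.DataFrame({'column': [1, 2, 1, 3]})
value = 1

# query() evaluates the expression against the frame itself, so no
# temporary variable for df is needed in the middle of a chain.
df_filtered = df.query('column == @value')
print(df_filtered)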

Source: (StackOverflow)

Pandas timeseries plot setting x-axis major and minor ticks and labels

I want to be able to set the major and minor xticks and their labels for a time series graph plotted from a Pandas time series object.

The Pandas 0.9 "what's new" page says:

"you can either use to_pydatetime or register a converter for the Timestamp type"

but I can't work out how to do that so that I can use the matplotlib ax.xaxis.set_major_locator and ax.xaxis.set_major_formatter (and minor) commands.

If I use them without converting the pandas times, the x-axis ticks and labels end up wrong.

By using the 'xticks' parameter I can pass the major ticks to pandas.plot, and then set the major tick labels. I can't work out how to do the minor ticks using this approach. (I can set the labels on the default minor ticks set by pandas.plot)

Here is my test code:

import datetime
import numpy as np
import matplotlib
import matplotlib.dates
import matplotlib.pyplot as plt
import pandas

print 'pandas.__version__ is ', pandas.__version__
print 'matplotlib.__version__ is ', matplotlib.__version__

dStart = datetime.datetime(2011,5,1) # 1 May
dEnd = datetime.datetime(2011,7,1) # 1 July    

dateIndex = pandas.date_range(start=dStart, end=dEnd, freq='D')
print "1 May to 1 July 2011", dateIndex      

testSeries = pandas.Series(data=np.random.randn(len(dateIndex)),
                           index=dateIndex)    

ax = plt.figure(figsize=(7,4), dpi=300).add_subplot(111)
testSeries.plot(ax=ax, style='v-', label='first line')    

# using MatPlotLib date time locators and formatters doesn't work with new
# pandas datetime index
ax.xaxis.set_minor_locator(matplotlib.dates.WeekdayLocator(byweekday=(1),
                                                           interval=1))
ax.xaxis.set_minor_formatter(matplotlib.dates.DateFormatter('%d\n%a'))
ax.xaxis.grid(True, which="minor")
ax.xaxis.grid(False, which="major")
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('\n\n\n%b%Y'))
plt.show()    

# set the major xticks and labels through pandas
ax2 = plt.figure(figsize=(7,4), dpi=300).add_subplot(111)
xticks = pandas.date_range(start=dStart, end=dEnd, freq='W-Tue')
print "xticks: ", xticks
testSeries.plot(ax=ax2, style='-v', label='second line',
                xticks=xticks.to_pydatetime())
ax2.set_xticklabels([x.strftime('%a\n%d\n%h\n%Y') for x in xticks]);
# set the text of the first few minor ticks created by pandas.plot
#    ax2.set_xticklabels(['a','b','c','d','e'], minor=True)
# remove the minor xtick labels set by pandas.plot 
ax2.set_xticklabels([], minor=True)
# turn the minor ticks created by pandas.plot off 
# plt.minorticks_off()
plt.show()
print testSeries['6/4/2011':'6/7/2011']

and its output:

pandas.__version__ is  0.9.1.dev-3de54ae
matplotlib.__version__ is  1.1.1
1 May to 1 July 2011 <class 'pandas.tseries.index.DatetimeIndex'>
[2011-05-01 00:00:00, ..., 2011-07-01 00:00:00]
Length: 62, Freq: D, Timezone: None

[Graph with strange dates on the x-axis]

xticks:  <class 'pandas.tseries.index.DatetimeIndex'>
[2011-05-03 00:00:00, ..., 2011-06-28 00:00:00]
Length: 9, Freq: W-TUE, Timezone: None

[Graph with correct dates]

2011-06-04   -0.199393
2011-06-05   -0.043118
2011-06-06    0.477771
2011-06-07   -0.033207
Freq: D

Update: I've been able to get closer to the layout I wanted by using a loop to build the major xtick labels:

# only show month for first label in month
month = dStart.month - 1
xticklabels = []
for x in xticks:
    if month != x.month:
        xticklabels.append(x.strftime('%d\n%a\n%h'))
        month = x.month
    else:
        xticklabels.append(x.strftime('%d\n%a'))

But this is a bit like doing the x-axis using ax.annotate: possible, but not ideal.
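
For what it's worth, a sketch of one workaround based on the to_pydatetime hint: plot through matplotlib directly with the DatetimeIndex converted to plain datetimes, so matplotlib's date locators and formatters see ordinary date numbers instead of the pandas plotting axis (byweekday=1 is Tuesday in matplotlib.dates):

import datetime

import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import numpy as np
import pandas

idx = pandas.date_range(start=datetime.datetime(2011, 5, 1),
                        end=datetime.datetime(2011, 7, 1), freq='D')
ts = pandas.Series(np.random.randn(len(idx)), index=idx)

fig, ax = plt.subplots(figsize=(7, 4))
# Plotting plain datetime objects keeps matplotlib's own date machinery
# in charge, so its locators and formatters behave as documented.
ax.plot(idx.to_pydatetime(), ts.values, 'v-')
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('\n\n%b\n%Y'))
ax.xaxis.set_minor_locator(mdates.WeekdayLocator(byweekday=1))
ax.xaxis.set_minor_formatter(mdates.DateFormatter('%d\n%a'))
ax.xaxis.grid(True, which='minor')
plt.show()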


Source: (StackOverflow)

Difference between map, applymap and apply methods in Pandas

Can you tell me when to use these vectorization methods, with basic examples? I see that map is a Series method, whereas the rest are DataFrame methods. I got confused about the apply and applymap methods, though. Why do we have two methods for applying a function to a DataFrame? Again, simple examples which illustrate the usage would be great!

Thanks!
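
A short illustration of the split as I understand it: map is element-wise over a Series, applymap is element-wise over every cell of a DataFrame, and apply hands whole rows or columns (as Series) to the function:

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [10, 20, 30]})

# Series.map: element-wise on a single column.
print(df['a'].map(lambda x: x * 2))

# DataFrame.applymap: element-wise on every cell.
print(df.applymap(lambda x: x * 2))

# DataFrame.apply: the function sees a whole column (axis=0) or a
# whole row (axis=1) as a Series.
print(df.apply(np.sum, axis=0))
print(df.apply(lambda row: row['a'] + row['b'], axis=1))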


Source: (StackOverflow)

Selecting columns

I have data in different columns, but I don't know how to extract it and save it in another variable.

index  a   b   c
1      2   3   4
2      3   4   5

How do I select columns b and c and save them into df1?

I tried

df1 = df['a':'b']
df1 = df.ix[:, 'a':'b']

Neither seems to work. Any ideas would help, thanks.
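
A minimal sketch: indexing with a list of column names selects those columns as a new DataFrame, and .copy() makes df1 independent of df:

import pandas as pd

df = pd.DataFrame({'a': [2, 3], 'b': [3, 4], 'c': [4, 5]})

df1 = df[['b', 'c']].copy()
print(df1)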


Source: (StackOverflow)

Deleting DataFrame row in Pandas based on column value

I have the following DataFrame...

             daysago  line_race rating        rw    wrating
 line_date                                                 
 2007-03-31       62         11     56  1.000000  56.000000
 2007-03-10       83         11     67  1.000000  67.000000
 2007-02-10      111          9     66  1.000000  66.000000
 2007-01-13      139         10     83  0.880678  73.096278
 2006-12-23      160         10     88  0.793033  69.786942
 2006-11-09      204          9     52  0.636655  33.106077
 2006-10-22      222          8     66  0.581946  38.408408
 2006-09-29      245          9     70  0.518825  36.317752
 2006-09-16      258         11     68  0.486226  33.063381
 2006-08-30      275          8     72  0.446667  32.160051
 2006-02-11      475          5     65  0.164591  10.698423
 2006-01-13      504          0     70  0.142409   9.968634
 2006-01-02      515          0     64  0.134800   8.627219
 2005-12-06      542          0     70  0.117803   8.246238
 2005-11-29      549          0     70  0.113758   7.963072
 2005-11-22      556          0     -1  0.109852  -0.109852
 2005-11-01      577          0     -1  0.098919  -0.098919
 2005-10-20      589          0     -1  0.093168  -0.093168
 2005-09-27      612          0     -1  0.083063  -0.083063
 2005-09-07      632          0     -1  0.075171  -0.075171
 2005-06-12      719          0     69  0.048690   3.359623
 2005-05-29      733          0     -1  0.045404  -0.045404
 2005-05-02      760          0     -1  0.039679  -0.039679
 2005-04-02      790          0     -1  0.034160  -0.034160
 2005-03-13      810          0     -1  0.030915  -0.030915
 2004-11-09      934          0     -1  0.016647  -0.016647

I need to remove the rows where line_race is equal to zero. What's the most efficient way to do this?
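
A minimal sketch using boolean indexing, which is vectorized and is usually the efficient choice:

import pandas as pd

df = pd.DataFrame({'line_race': [11, 9, 0, 10, 0],
                   'rating': [56, 66, 70, 83, 64]})

# Keep only the rows where line_race is non-zero.
df = df[df.line_race != 0]
print(df)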


Source: (StackOverflow)

Converting between datetime, Timestamp and datetime64

How do I convert a numpy.datetime64 object to a datetime.datetime (or Timestamp)?

In the following code, I create datetime, Timestamp and datetime64 objects.

import datetime
import numpy as np
import pandas as pd
dt = datetime.datetime(2012, 5, 1)
# A strange way to extract a Timestamp object, there's surely a better way?
ts = pd.DatetimeIndex([dt])[0]
dt64 = np.datetime64(dt)

In [7]: dt
Out[7]: datetime.datetime(2012, 5, 1, 0, 0)

In [8]: ts
Out[8]: <Timestamp: 2012-05-01 00:00:00>

In [9]: dt64
Out[9]: numpy.datetime64('2012-05-01T01:00:00.000000+0100')

Note: it's easy to get the datetime from the Timestamp:

In [10]: ts.to_datetime()
Out[10]: datetime.datetime(2012, 5, 1, 0, 0)

But how do we extract the datetime or Timestamp from a numpy.datetime64 (dt64)?


Update: a somewhat nasty example in my dataset (perhaps the motivating example) seems to be:

dt64 = numpy.datetime64('2002-06-28T01:00:00.000000000+0100')

which should be datetime.datetime(2002, 6, 28, 1, 0), and not a long (!) (1025222400000000000L)...
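
A sketch of the conversions I believe work: pd.Timestamp accepts a numpy.datetime64 directly, and to_pydatetime() goes the rest of the way (the sketch uses a naive timestamp, since timezone-offset strings like the +0100 form above are deprecated in newer numpy):

import datetime

import numpy as np
import pandas as pd

dt64 = np.datetime64('2002-06-28T01:00:00.000000000')

ts = pd.Timestamp(dt64)    # numpy.datetime64 -> pandas Timestamp
dt = ts.to_pydatetime()    # Timestamp -> datetime.datetime

assert dt == datetime.datetime(2002, 6, 28, 1, 0)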


Source: (StackOverflow)

Python pandas, widen output display?

Is there a way to widen the display of output in either interactive or script-execution mode?

Specifically, I am using the describe() function on a Pandas dataframe. When the dataframe is 5 columns (labels) wide, I get the descriptive statistics that I want. However, if the dataframe has any more columns, the statistics are suppressed and something like this is returned:

Index: 8 entries, count to max
Data columns:
x1          8  non-null values
x2          8  non-null values
x3          8  non-null values
x4          8  non-null values
x5          8  non-null values
x6          8  non-null values
x7          8  non-null values

The "8" value is given whether there are 6 or 7 columns. What does the "8" refer to?

I have already tried dragging the IDLE window larger, as well as increasing the "Configure IDLE" width options, to no avail.

My purpose in using Pandas and describe() is to avoid using a second program like STATA to do basic data manipulation and investigation.

Thanks.

Python/IDLE 2.7.3
Pandas 0.8.1
Notepad++ 6.1.4 (UNICODE)
Windows Vista SP2
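
(As for the "8": describe() produces eight summary rows (count, mean, std, min, 25%, 50%, 75%, max), which is what "8 entries, count to max" refers to; it is not the number of data columns.)

A sketch using the display options; pd.set_option is the modern spelling, and on pandas as old as 0.8.1 the equivalent was reportedly pandas.set_printoptions:

import pandas as pd

# Widen the console representation so describe() prints the full
# statistics instead of collapsing to the column summary.
pd.set_option('display.width', 200)
pd.set_option('display.max_columns', 20)

# On very old pandas (~0.8), reportedly:
# pandas.set_printoptions(max_columns=20)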


Source: (StackOverflow)

use a list of values to select rows from a pandas dataframe [duplicate]

Possible Duplicate:
how to filter the dataframe rows of pandas by “within”/“in”?

Let's say I have the following pandas dataframe:

from pandas import DataFrame

df = DataFrame({'A': [5, 6, 3, 4], 'B': [1, 2, 3, 5]})
df

     A   B
0    5   1
1    6   2
2    3   3
3    4   5

I can subset based on a specific value:

x = df[df['A'] == 3]
x

     A   B
2    3   3

But how can I subset based on a list of values? Something like this:

list_of_values = [3,6]

y = df[df['A'] in list_of_values]
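
A minimal sketch: Series.isin builds the boolean mask that the in keyword cannot:

import pandas as pd

df = pd.DataFrame({'A': [5, 6, 3, 4], 'B': [1, 2, 3, 5]})

# isin() returns True where A is a member of the list.
y = df[df['A'].isin([3, 6])]
print(y)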

Source: (StackOverflow)

How to drop rows of Pandas dataframe whose value of certain column is NaN

I have a df:

>>> df
                 STK_ID  EPS  cash
STK_ID RPT_Date                   
601166 20111231  601166  NaN   NaN
600036 20111231  600036  NaN    12
600016 20111231  600016  4.3   NaN
601009 20111231  601009  NaN   NaN
601939 20111231  601939  2.5   NaN
000001 20111231  000001  NaN   NaN

Then I just want the records whose EPS is not NaN; that is, df.drop(....) should return the dataframe below:

                  STK_ID  EPS  cash
STK_ID RPT_Date                   
600016 20111231  600016  4.3   NaN
601939 20111231  601939  2.5   NaN

How do I do that?
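
A minimal sketch with dropna(), whose subset argument restricts the NaN test to particular columns:

import numpy as np
import pandas as pd

df = pd.DataFrame({'EPS': [np.nan, np.nan, 4.3, np.nan, 2.5, np.nan],
                   'cash': [np.nan, 12, np.nan, np.nan, np.nan, np.nan]})

# Rows with NaN cash but valid EPS survive; only NaN EPS is fatal.
df = df.dropna(subset=['EPS'])
print(df)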


Source: (StackOverflow)

Pandas: change data type of columns

I want to convert a table, represented as a list of lists, into a Pandas DataFrame. As an extremely simplified example:

import pandas as pd

a = [['a', '1.2', '4.2'], ['b', '70', '0.03'], ['x', '5', '0']]
df = pd.DataFrame(a)

What is the best way to convert the columns to the appropriate types, in this case converting columns 2 and 3 into floats? Is there a way to specify the types while converting to a DataFrame? Or is it better to create the DataFrame first and then loop through the columns to change the type of each column? Ideally I would like to do this dynamically, because there can be hundreds of columns and I don't want to specify exactly which columns are of which type. All I can guarantee is that each column contains values of the same type.
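
A sketch of the dynamic approach, assuming pandas 0.17 or newer where pd.to_numeric exists: try to convert every column and leave the ones that fail alone:

import pandas as pd

a = [['a', '1.2', '4.2'], ['b', '70', '0.03'], ['x', '5', '0']]
df = pd.DataFrame(a)

# to_numeric converts what it can; errors='ignore' leaves columns
# that contain non-numeric strings (like column 0) unchanged.
df = df.apply(pd.to_numeric, errors='ignore')
print(df.dtypes)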


Source: (StackOverflow)

iterating row by row through a pandas dataframe [duplicate]

Possible Duplicate:
What is the most efficient way to loop through dataframes with pandas?

I'm looking to iterate row by row through a pandas DataFrame. The way I'm doing it so far is as follows:

for i in df.index:
    do_something(df.ix[i])

Is there a more performant and/or more idiomatic way to do this? I know about apply, but sometimes it's more convenient to use a for loop. Thanks in advance.
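
A minimal sketch of the two built-in iterators: iterrows() yields (index, Series) pairs, while itertuples() is usually faster because it avoids building a Series per row:

import pandas as pd

df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]})

for i, row in df.iterrows():
    print(i, row['c1'], row['c2'])

# itertuples() yields one tuple per row (namedtuples on newer pandas).
for row in df.itertuples():
    print(row)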


Source: (StackOverflow)