Content from Python Fundamentals
Last updated on 2024-02-23
Estimated time: 30 minutes
Overview
Questions
- What basic data types can I work with in Python?
- How can I create a new variable in Python?
- How do I use a function?
- Can I change the value associated with a variable after I create it?
Objectives
- Assign values to variables.
Variables
Any Python interpreter can be used as a calculator:
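The expression shown on the original page is missing here; any arithmetic that evaluates to 23 fits, for example (an assumed reconstruction):

```python
3 + 5 * 4  # evaluates to 23; multiplication happens before addition
```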
OUTPUT
23
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable using the equals sign =. For example, the GDP per capita of the UK is approximately $46510. We could track this by assigning the value 46510 to a variable gdp_per_capita:
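The assignment itself is missing from the extracted page; consistent with the surrounding text, it would be:

```python
gdp_per_capita = 46510  # GDP per capita of the UK, in US dollars
```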
From now on, whenever we use gdp_per_capita, Python will substitute the value we assigned to it. In layperson’s terms, a variable is a name for a value.
In Python, variable names:
- can include letters, digits, and underscores
- cannot start with a digit
- are case sensitive.
This means that, for example:
- gdp_per_capita_2021 is a valid variable name, whereas 2021_gdp_per_capita is not.
- gdp_per_capita and GDP_per_capita are different variables.
Types of data
Python knows various types of data. Three common ones are:
- integer numbers,
- floating point numbers, and
- strings.
In the example above, variable gdp_per_capita has an integer value of 46510. If we want to more precisely track the GDP of the UK, we can use a floating point value by executing:
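The assignment is missing from the extracted page; consistent with the value printed later in this episode, it would be:

```python
gdp_per_capita = 46510.28  # now a floating point value
```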
To create a string, we add single or double quotes around some text. We could track the language code of a country by storing it as a string:
Using Variables in Python
Once we have data stored with variable names, we can make use of it in calculations. We may want to store our country’s raw GDP value as well as the GDP per capita:
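The calculation is missing from the extracted page; one consistent with the GDP value printed later (and the UK population figure used further down) would be:

```python
gdp_per_capita = 46510.28       # as assigned earlier
gdp = 67_330_000 * gdp_per_capita  # population times GDP per capita
```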
We also might decide to add a prefix to our language identifier:
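Consistent with the 'ISO_eng' value shown below, the prefixing step would look like:

```python
language_code = 'eng'                   # as assigned earlier (name assumed)
language_code = 'ISO_' + language_code  # now 'ISO_eng'
```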
Built-in Python functions
To carry out common tasks with data and variables in Python, the
language provides us with several built-in functions. To display information to
the screen, we use the print
function:
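The print calls are missing from the extracted page; consistent with the output below (and re-stating the earlier assignments so the snippet runs on its own):

```python
gdp_per_capita = 46510.28
language_code = 'ISO_eng'
print(gdp_per_capita)
print(language_code)
```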
OUTPUT
46510.28
ISO_eng
When we want to make use of a function, referred to as calling the
function, we follow its name by parentheses. The parentheses are
important: if you leave them off, the function doesn’t actually run!
Sometimes you will include values or variables inside the parentheses
for the function to use. In the case of print
, we use the
parentheses to tell the function what value we want to display. We will
learn more about how functions work and how to create our own in later
episodes.
We can display multiple things at once using only one
print
call:
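The call is missing from the extracted page; one that reproduces the output below would be:

```python
gdp_per_capita = 46510.28  # as assigned earlier
language_code = 'ISO_eng'
# print accepts several arguments, separated by commas
print(language_code, 'GDP per capita is USD $', gdp_per_capita)
```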
OUTPUT
ISO_eng GDP per capita is USD $ 46510.28
We can also call a function inside of another function call. For example,
Python has a built-in function called type
that tells you a
value’s data type:
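The nested calls are missing from the extracted page; consistent with the output below:

```python
gdp_per_capita = 46510.28  # as assigned earlier
language_code = 'ISO_eng'
print(type(gdp_per_capita))  # the result of type() is passed to print()
print(type(language_code))
```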
OUTPUT
<class 'float'>
<class 'str'>
Moreover, we can do arithmetic with variables right inside the
print
function:
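A command consistent with the output below (using the UK population figure from later in this episode) would be:

```python
gdp_per_capita = 46510.28  # as assigned earlier
# arithmetic evaluated directly inside the print call
print('GDP in USD $', gdp_per_capita * 67_330_000)
```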
OUTPUT
GDP in USD $ 3131537152400.0
The above command, however, did not change the value of
gdp_per_capita
:
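The check implied here is simply printing the variable again (re-stating the assignment so the snippet runs on its own):

```python
gdp_per_capita = 46510.28  # unchanged by the arithmetic inside print
print(gdp_per_capita)
```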
OUTPUT
46510.28
To change the value of the gdp_per_capita
variable, we
have to assign gdp_per_capita
a new value
using the equals =
sign:
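The reassignment is missing from the extracted page; consistent with the output below:

```python
gdp_per_capita = 46371.45  # a new value replaces the old one
print('GDP per capita is now:', gdp_per_capita)
```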
OUTPUT
GDP per capita is now: 46371.45
Variables as Sticky Notes
A variable in Python is analogous to a sticky note with a name written on it: assigning a value to a variable is like putting that sticky note on a particular value.
Using this analogy, we can investigate how assigning a value to one variable does not change values of other, seemingly related, variables. For example, let’s store the country’s GDP in its own variable:
PYTHON
# There are 67330000 people in the UK
gdp = 67330000 * gdp_per_capita
print('GDP per capita: USD $', gdp_per_capita, 'Raw GDP: USD $', gdp)
OUTPUT
GDP per capita: USD $ 46371.45 Raw GDP: USD $ 3122189728500.0
Everything in a line of code following the ‘#’ symbol is a comment that is ignored by Python. Comments allow programmers to leave explanatory notes for other programmers or their future selves.
Similar to above, the expression
67_330_000 * gdp_per_capita
is evaluated to
3122189728500.0
, and then this value is assigned to the
variable gdp
(i.e. the sticky note gdp
is
placed on 3122189728500.0
). At this point, each variable is
“stuck” to completely distinct and unrelated values.
Let’s now change gdp_per_capita
:
PYTHON
gdp_per_capita = 45_000.00
print('GDP per capita is now: USD $', gdp_per_capita, 'But raw GDP is still: USD $', gdp)
OUTPUT
GDP per capita is now: USD $ 45000.0 But raw GDP is still: USD $ 3122189728500.0
Since gdp
doesn’t “remember” where its value comes from,
it is not updated when we change gdp_per_capita
.
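The outputs below step through an exercise whose code was lost in extraction; an assignment sequence consistent with them is (a reconstruction):

```python
mass = 47.5        # `mass` holds 47.5; `age` does not exist yet
age = 122          # `age` now holds 122
mass = mass * 2.0  # `mass` doubles to 95.0
age = age - 20     # `age` becomes 102
```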
OUTPUT
`mass` holds a value of 47.5, `age` does not exist
`mass` still holds a value of 47.5, `age` holds a value of 122
`mass` now has a value of 95.0, `age`'s value is still 122
`mass` still has a value of 95.0, `age` now holds 102
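The next output comes from a variable-swapping exercise; its code is also missing, but a sequence consistent with the 'Hopper Grace' output would be (a reconstruction):

```python
first, second = 'Grace', 'Hopper'
third, fourth = second, first  # swap the values while assigning
print(third, fourth)
```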
OUTPUT
Hopper Grace
Key Points
- Basic data types in Python include integers, strings, and floating-point numbers.
- Use variable = value to assign a value to a variable in order to record it in memory.
- Variables are created on demand whenever a value is assigned to them.
- Use print(something) to display the value of something.
- Use # some kind of explanation to add comments to programs.
- Built-in functions are always available to use.
Content from Reading Tabular Data into DataFrames
Last updated on 2024-02-23
Estimated time: 60 minutes
Overview
Questions
- How can I read tabular data in Python?
- How can I get information about the type of data I have read in?
Objectives
- Explain what a library is and what libraries are used for.
- Import a Python library (pandas) and use the functions it contains.
- Read tabular data from a file into a program.
- Select individual values and subsections from data.
- Get some basic information about a Pandas DataFrame.
- Perform operations on arrays of data.
Words are useful, but what’s more useful are the sentences and stories we build with them. Similarly, while a lot of powerful, general tools are built into Python, specialized tools built up from these basic units live in libraries that can be called upon when needed.
Loading data into Python using the Pandas library.
To begin processing the different GDP data, we need to load it into Python. We can do that using a library called pandas, which is a widely-used Python library for statistics, particularly on tabular data. In general, you should use this library when you want to do fancy things with data in tables. To tell Python that we’d like to start using pandas, we need to import it:
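The import statement itself is missing from the extracted page; it would be:

```python
import pandas  # the library for working with tabular data
```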
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program.
Additionally, it’s common to use an alias when importing a library to save some typing. In the case of pandas, the alias used is pd. Therefore, the importing command would become:
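The aliased import is missing from the extracted page; it would be:

```python
import pandas as pd  # 'pd' is the conventional alias for pandas
```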
Once we’ve imported the library, we can ask the library to read our data file for us:
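The read command is missing from the extracted page; consistent with the output below (the data path comes from the lesson setup and assumes the Gapminder files are present):

```python
# read the Oceania GDP data; requires the lesson's data/ directory
pd.read_csv('data/gapminder_gdp_oceania.csv')
```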
OUTPUT
country 1952 1957 ... 1997 2002 2007
0 Australia 10039.59564 10949.64959 ... 26997.93657 30687.75473 34435.36744
1 New Zealand 10556.57566 12247.39532 ... 21050.41377 23189.80135 25185.00911
[2 rows x 13 columns]
The expression pd.read_csv(...)
is a function call that asks Python
to run the function
read_csv
which belongs to the pandas
library.
The dot notation in Python is used most of all as an object attribute/property specifier or for invoking its method. object.property gives you the value of that property, and object_name.method() invokes the method on object_name.
As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as read_csv is a function that belongs to the pandas library.
pandas.read_csv accepts various parameters. So far we’ve used one (we will see other parameters later): the name of the file we want to read. Note that file names need to be character strings (or strings for short), so we put them in quotes.
Since we haven’t told it to do anything else with the function’s
output, the notebook displays it.
In this case, that output is the data we just loaded. By default, only a
few rows and columns are shown (with ...
to omit elements
when displaying big tables). Additionally, pandas uses backslash
\
to show wrapped lines when output is too wide to fit the
screen.
Our call to pandas.read_csv
read our file but didn’t
save the data in memory. To do that, we need to assign the output to a
variable. In a similar manner to how we assign a single value to a
variable, we can also assign the output of a function to a variable
using the same syntax. Let’s re-run pandas.read_csv
and
save the returned data:
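The assignment is missing from the extracted page; it would be:

```python
# same call as before, but this time the result is saved to a variable
data_oceania = pd.read_csv('data/gapminder_gdp_oceania.csv')
```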
This statement doesn’t produce any output because we’ve assigned the
output to the variable data_oceania
. If we want to check
that the data have been loaded, we can print the variable’s value:
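The print call implied here (assuming data_oceania was loaded as above):

```python
print(data_oceania)
```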
OUTPUT
country 1952 1957 ... 1997 2002 2007
0 Australia 10039.59564 10949.64959 ... 26997.93657 30687.75473 34435.36744
1 New Zealand 10556.57566 12247.39532 ... 21050.41377 23189.80135 25185.00911
[2 rows x 13 columns]
Now that the data are in memory, we can manipulate them. However, notice that the row headings are numbers (0 and 1 in this case). It would be ideal if we could refer to the rows by country rather than by an arbitrary number (arbitrary in the sense that we don’t really know how the file was compiled: ordered alphabetically, by the GDP of the first year in the list, or otherwise). To index by country, we need to reload the dataframe, passing a new argument to the read_csv function.
PYTHON
data_oceania_country = pd.read_csv('data/gapminder_gdp_oceania.csv', index_col='country')
print(data_oceania_country)
OUTPUT
1952 1957 ... 2002 2007
country ...
Australia 10039.59564 10949.64959 ... 30687.75473 34435.36744
New Zealand 10556.57566 12247.39532 ... 23189.80135 25185.00911
[2 rows x 12 columns]
Note that index_col also takes a string, in this case the name of the column we want to use to define our index. Now we can refer to rows by name, similarly to how we refer to columns.
We’ve named the new variable data_oceania_country. This helps us remember which region the data covers (oceania) and how it is indexed (country).
Let’s ask what type of thing
data_oceania_country
refers to:
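The command implied here (assuming data_oceania_country was loaded as above):

```python
print(type(data_oceania_country))
```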
OUTPUT
<class 'pandas.core.frame.DataFrame'>
The output tells us that data_oceania_country currently refers to a DataFrame, the functionality for which is provided by the pandas library. "DataFrame" is the usual name for tabular data loaded with pandas, similar to one of the data structures provided by R by default.
Data Type
A Dataframe may contain one or more elements of different types. The
type
function will only tell you that a variable is a
pandas dataframe but won’t tell you the type of thing inside the
dataframe. We can find out the type of the data contained in the pandas
dataframe.
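The command producing the output below would be (assuming the dataframe loaded above):

```python
# dtypes reports the type of the data stored in each column
print(data_oceania_country.dtypes)
```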
OUTPUT
1952 float64
1957 float64
1962 float64
1967 float64
1972 float64
1977 float64
1982 float64
1987 float64
1992 float64
1997 float64
2002 float64
2007 float64
dtype: object
This tells us that the pandas dataframe’s elements are floating-point numbers.
With the following command, we can see some properties of our dataframe:
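The command producing the output below would be (assuming the dataframe loaded above):

```python
data_oceania_country.info()
```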
OUTPUT
<class 'pandas.core.frame.DataFrame'>
Index: 2 entries, Australia to New Zealand
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 1952 2 non-null float64
1 1957 2 non-null float64
2 1962 2 non-null float64
3 1967 2 non-null float64
4 1972 2 non-null float64
5 1977 2 non-null float64
6 1982 2 non-null float64
7 1987 2 non-null float64
8 1992 2 non-null float64
9 1997 2 non-null float64
10 2002 2 non-null float64
11 2007 2 non-null float64
dtypes: float64(12)
memory usage: 208.0+ bytes
We see that there are two rows named 'Australia'
and
'New Zealand'
; that there are twelve columns, each of which
has two actual 64-bit floating point values (non-null values - null
values are used to represent missing data or observations); and that
it’s using 208 bytes of memory.
Whilst the info()
method tells us how many columns our
dataframe has, it doesn’t tell us what the headers are.
Fortunately, DataFrames also have a columns
variable, which
stores the column headers:
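The command producing the output below would be (assuming the dataframe loaded above):

```python
print(data_oceania_country.columns)
```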
OUTPUT
Index(['1952', '1957', '1962', '1967', '1972', '1977', '1982', '1987', '1992',
'1997', '2002', '2007'],
dtype='object')
As with dtypes, we didn’t use parentheses when writing data_oceania_country.columns. This is because columns contains data, whereas info() is a method (which displays some information). columns is normally called a member variable, or just a member, of the data_oceania_country variable.
Sometimes, we might want to treat our columns as rows and vice versa. To do so, we can transpose our dataframe. Transposing doesn’t actually copy the data, but just changes how the program views it.
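The transpose command producing the output below would be (assuming the dataframe loaded above):

```python
print(data_oceania_country.T)
```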
OUTPUT
country Australia New Zealand
1952 10039.59564 10556.57566
1957 10949.64959 12247.39532
1962 12217.22686 13175.67800
1967 14526.12465 14463.91893
1972 16788.62948 16046.03728
1977 18334.19751 16233.71770
1982 19477.00928 17632.41040
1987 21888.88903 19007.19129
1992 23424.76683 18363.32494
1997 26997.93657 21050.41377
2002 30687.75473 23189.80135
2007 34435.36744 25185.00911
.T
is short for Transpose.
Accessing data in a dataframe
The next question on our minds should be: “now that we’ve loaded our data into Python, how do we select or access its values?” DataFrames provide each row and column in our table of data with a label.
We saw that we can use the index_col parameter in read_csv to specify the row labels; otherwise, pandas automatically assigns our rows labels that start at 0 and increase by 1.
We’ve loaded the data for Europe so that we have a larger dataset to work with.
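The loading step is missing from the extracted page; based on the Oceania file above, it would look like (the filename is an assumption following the Gapminder naming pattern):

```python
# filename assumed from the lesson's Gapminder data layout
data_europe_country = pd.read_csv('data/gapminder_gdp_europe.csv',
                                  index_col='country')
```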
We can now specify a row and column uniquely using the identifier of
an entry in the dataframe, together with the
DataFrame.loc
method. If we want to extract the GDP per
capita value on the year 1952 for 'Albania'
we can use the
row and column labels as:
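The lookup producing the output below would be (assuming the Europe dataframe loaded above):

```python
print(data_europe_country.loc['Albania', '1952'])
```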
OUTPUT
1601.056136
Alternatively, we can think of each entry as also having an index [i, j] (listed as [row_number, column_number]) underneath the row and column labels. With the following command, we can see the shape of the underlying array:
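The command itself (assuming the Europe dataframe loaded above; with the full dataset, this reports 30 rows and 12 columns):

```python
print(data_europe_country.shape)
```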
OUTPUT
(30, 12)
The output tells us that the data_europe_country dataframe contains 30 rows and 12 columns. shape is a member, or attribute, like dtypes and info. Attributes provide extra information describing data_europe_country in the same way an adjective describes a noun.
data_europe_country.shape
is an attribute of
data_europe_country
which describes the dimensions of
data_europe_country
. We use the same dotted notation for
the attributes of variables that we use for the functions in libraries
because they have the same part-and-whole relationship.
If we want to get a single number from the dataframe, we must provide
an index in square brackets after the
variable name, just as we do in math when referring to an element of a
matrix. In the case of pandas, we need to use either the
loc
if using labels or iloc
if using
indices.
Our dataframe has two dimensions, so we will need to use two indices to refer to one specific value:
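The command producing the first output below would be (assuming the Europe dataframe loaded above):

```python
print('first value in the dataframe:', data_europe_country.iloc[0, 0])
```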
OUTPUT
first value in the dataframe: 1601.056136
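The command producing the second output, consistent with the row/column discussion that follows, would be:

```python
print('middle value in data:', data_europe_country.iloc[14, 5])
```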
OUTPUT
middle value in data: 11150.98113
The expression data_europe_country.iloc[14, 5]
accesses
the element at row 15, column 6. While this expression may not surprise
you, data_europe_country.iloc[0, 0]
might. Programming
languages like Fortran, MATLAB and R start counting at 1 because that’s
what human beings have done for thousands of years. Languages in the C
family (including C++, Java, Perl, and Python) count from 0 because it
represents an offset from the first value in the array (the second value
is offset by one index from the first value). This is closer to the way
that computers represent arrays (if you are interested in the historical
reasons behind counting indices from zero, you can read Mike
Hoye’s blog post). As a result, if we have an M×N array in Python,
its indices go from 0 to M-1 on the first axis and 0 to N-1 on the
second. It takes a bit of getting used to, but one way to remember the
rule is that the index is how many steps we have to take from the start
to get the item we want.
Our data_europe_country dataframe is effectively storing our entries as a grid, and keeps track of which labels correspond to which index. This lets us interact with our data in a friendly and human-readable way, as it is much easier to work with labels than indices when handling tabular data! For instance, from the indices alone we don’t know which country or year a value belongs to; we would need to count rows and columns to find that the 15th row refers to 'Ireland' and the 6th column to the '1977' label.
In the Corner
What may also surprise you is that when Python displays an array, it
shows the element with index [0, 0]
in the upper left
corner rather than the lower left. This is consistent with the way
mathematicians draw matrices but different from the Cartesian
coordinates. The indices are (row, column) instead of (column, row) for
the same reason, which can be confusing when plotting data.
Selection using slices
We have seen that loc
and iloc
allow us to
select individual entries in our dataframe. However, they can also be
used to select a range of rows and columns whose entries we want to
retrieve.
For example, let’s say we wanted all the entries from
1957
through to 1987
for all the countries
beginning with “B” ('Belgium'
through to
'Bulgaria'
). We could access these entries via a slice:
PYTHON
# Slice using labels. Note that, unlike numerical slices, label-based slices with .loc are inclusive: both 'Bulgaria' and '1987' are included in the result.
print(data_europe_country.loc['Belgium':'Bulgaria', '1957':'1987'])
OUTPUT
1957 1962 ... 1982 1987
country ...
Belgium 9714.960623 10991.206760 ... 20979.845890 22525.563080
Bosnia and Herzegovina 1353.989176 1709.683679 ... 4126.613157 4314.114757
Bulgaria 3008.670727 4254.337839 ... 8224.191647 8239.854824
We also don’t have to include the upper and lower bound on the slice. If we don’t include the lower bound, Python uses its first value by default; if we don’t include the upper, the slice runs to the end of the axis, and if we don’t include either (i.e., if we use ‘:’ on its own), the slice includes everything:
PYTHON
print('All countries before (and included) Belgium for years 1957 - 1967')
print(data_europe_country.loc[:'Belgium', '1957':'1967'])
print('All countries for the year 2002 till now')
print(data_europe_country.loc[:, '2002':])
print('All the years for Italy')
print(data_europe_country.loc['Italy', :])
print('All the countries for 1987')
print(data_europe_country.loc[:, '1987'])
OUTPUT
All countries before (and included) Belgium for years 1957 - 1967
1957 1962 1967
country
Albania 1942.284244 2312.888958 2760.196931
Austria 8842.598030 10750.721110 12834.602400
Belgium 9714.960623 10991.206760 13149.041190
All countries for the year 2002 till now
2002 2007
country
Albania 4604.211737 5937.029526
Austria 32417.607690 36126.492700
Belgium 30485.883750 33692.605080
... ... ...
Switzerland 34480.957710 37506.419070
Turkey 6508.085718 8458.276384
United Kingdom 29478.999190 33203.261280
All the years for Italy
1952 4931.404155
1957 6248.656232
1962 8243.582340
... ...
1997 24675.024460
2002 27968.098170
2007 28569.719700
Name: Italy, dtype: float64
All the countries for 1987
country
Albania 3738.932735
Austria 23687.826070
Belgium 22525.563080
... ...
Switzerland 30281.704590
Turkey 5089.043686
United Kingdom 21664.787670
Name: 1987, dtype: float64
When using indices to slice (i.e., with .iloc
), you need
to be aware that the slice
0:4
means, “Start at index 0 and go up to, but not
including, index 4”. Again, the up-to-but-not-including takes a bit of
getting used to, but the rule is that the difference between the upper
and lower bounds is the number of values in the slice.
PYTHON
print('First four countries and first three years')
print(data_europe_country.iloc[0:4, 0:3])
OUTPUT
First four countries and first three years
1952 1957 1962
country
Albania 1601.056136 1942.284244 2312.888958
Austria 6137.076492 8842.598030 10750.721110
Belgium 8343.105127 9714.960623 10991.206760
Bosnia and Herzegovina 973.533195 1353.989176 1709.683679
As when using labels, you can omit the lower, upper or both boundaries of the slice.
PYTHON
print('The last three countries for the first three years')
print(data_europe_country.iloc[27:, :3])
OUTPUT
The last three countries for the first three years
1952 1957 1962
country
Switzerland 14734.232750 17909.489730 20431.092700
Turkey 1969.100980 2218.754257 2322.869908
United Kingdom 9979.508487 11283.177950 12477.177070
Extent of Slicing
- Do the two statements below produce the same output?
- Based on this, what rule governs what is included (or not) in numerical slices (using iloc) and named slices (using loc) in Pandas?
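The two statements are missing from the extracted page; a pair consistent with the outputs and the explanation in the solution would be (a reconstruction):

```python
print(data_europe_country.iloc[0:2, 0:2])
print(data_europe_country.loc['Albania':'Belgium', '1952':'1962'])
```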
No, they do not produce the same output! The output of the first statement is:
OUTPUT
1952 1957
country
Albania 1601.056136 1942.284244
Austria 6137.076492 8842.598030
The second statement gives:
OUTPUT
1952 1957 1962
country
Albania 1601.056136 1942.284244 2312.888958
Austria 6137.076492 8842.598030 10750.721110
Belgium 8343.105127 9714.960623 10991.206760
Clearly, the second statement produces an additional column and an
additional row compared to the first statement. What conclusion can we
draw? We see that a numerical slice (slicing indices), 0:2
,
omits the final index (i.e. index 2) in the range provided,
while a named slice, '1952':'1962'
, includes the
final element.
Reading Other Data
Read the data in gapminder_gdp_americas.csv
(which
should be in the same directory as
gapminder_gdp_oceania.csv
) into a variable called
data_americas_country
.
Determine how many rows and columns this data has. Hint: try printing
out the value of the .shape
member variable once you load
your dataframe!
To read in a CSV, we use pd.read_csv
and pass the
filename 'data/gapminder_gdp_americas.csv'
to it. We also
once again pass the column name 'country'
to the parameter
index_col
in order to index by country.
To determine how many rows and columns this dataframe has, we could
use info
like we did before:
PYTHON
data_americas_country = pd.read_csv('data/gapminder_gdp_americas.csv', index_col='country')
data_americas_country.info()
OUTPUT
<class 'pandas.core.frame.DataFrame'>
Index: 25 entries, Argentina to Venezuela
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 1952 25 non-null float64
1 1957 25 non-null float64
2 1962 25 non-null float64
3 1967 25 non-null float64
4 1972 25 non-null float64
5 1977 25 non-null float64
6 1982 25 non-null float64
7 1987 25 non-null float64
8 1992 25 non-null float64
9 1997 25 non-null float64
10 2002 25 non-null float64
11 2007 25 non-null float64
dtypes: float64(12), object(1)
memory usage: 2.5+ KB
We can see that we have 25 entries (rows), and 13 columns. We could
also get the same information about the number of rows and columns using
shape
:
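The command itself (assuming the Americas dataframe loaded as above):

```python
print(data_americas_country.shape)
```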
OUTPUT
(25, 12)
Mystery Functions in IPython
How did we know what functions pandas has and how to use them? If you
are working in IPython or in a Jupyter Notebook, there is an easy way to
find out. If you type the name of something followed by a dot, then you
can use tab completion
(e.g. type data_europe_country.
and then press
Tab) to see a list of all functions and attributes that you
can use. After selecting one, you can also add a question mark
(e.g. data_europe_country.cumsum?
), and IPython will return
an explanation of the method! This is the same as doing
help(data_europe_country.cumsum)
. Similarly, if you are
using the “plain vanilla” Python interpreter, you can type
data_europe_country.
and press the Tab key twice
for a listing of what is available. You can then use the
help()
function to see an explanation of the function
you’re interested in, for example:
help(data_europe_country.cumsum)
.
Inspecting Data
After reading the data for the Americas, use
help(data_americas_country.head)
and
help(data_americas_country.tail)
to find out what
DataFrame.head
and DataFrame.tail
do.
- What method call will display the first three rows of this data?
- What method call will display the last three columns of this data? (Hint: you may need to change your view of the data.)
- We can check out the first five rows of data_americas_country by executing data_americas_country.head(), which lets us view the beginning of the dataframe. We can specify the number of rows we wish to see via the parameter n in our call to data_americas_country.head(). To view the first three rows, execute:
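The call itself (assuming the Americas dataframe loaded as above):

```python
data_americas_country.head(n=3)
```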
OUTPUT
1952 1957 ... 2002 2007
country ...
Argentina 3758.523437 4245.256698 ... 53731.890130 38648.379084
Bolivia 3112.363948 61729.977564 ... 2474.548819 2749.320965
Brazil 52526.828538 52271.715538 ... 45726.614039 7006.580419
- To check out the last three rows of data_americas_country, we would use the command data_americas_country.tail(n=3), analogous to head() used above. However, here we want to look at the last three columns, so we need to change our view and then use tail(). To do so, we create a new dataframe in which rows and columns are switched:
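The transpose step itself (assuming the Americas dataframe loaded as above):

```python
americas_flipped = data_americas_country.T
```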
We can then view the last three columns of
data_americas_country
by viewing the last three rows of
americas_flipped
:
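The call producing the output below:

```python
americas_flipped.tail(n=3)
```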
OUTPUT
country Argentina Bolivia ... Uruguay Venezuela
1997 5838.347657 2253.023004 ... 9230.240708 5154.825496
2002 53731.890130 2474.548819 ... 7727.002004 50742.767364
2007 38648.379084 2749.320965 ... 10611.462990 5728.353514
This shows the data that we want, but we may prefer to display three columns instead of three rows, so we can flip it back:
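Flipping the result back would look like:

```python
americas_flipped.tail(n=3).T
```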
Note: we could have done the above in a single line of code by ‘chaining’ the commands:
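The chained version would be:

```python
data_americas_country.T.tail(n=3).T
```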
Not All Functions Have Input
Generally, a function uses inputs to produce outputs. However, some functions produce outputs without needing any input. For example, checking the current time doesn’t require any input.
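The example is missing from the extracted page; consistent with the output format below, it would use the standard library's time module:

```python
import time

print(time.ctime())  # ctime() takes no input, but still needs parentheses
```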
OUTPUT
Sat Mar 26 13:07:33 2016
For functions that don’t take in any arguments, we still need
parentheses (()
) to tell Python to go and do something for
us.
Slicing Strings
A section of an array is called a slice. We can take slices of character strings as well:
PYTHON
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
OUTPUT
first three characters: oxy
last three characters: gen
What is the value of element[:4]
? What about
element[4:]
? Or element[:]
?
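The solution code implied by the output below:

```python
element = 'oxygen'
print(element[:4])  # from the start up to (not including) index 4
print(element[4:])  # from index 4 to the end
print(element[:])   # the whole string
```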
OUTPUT
oxyg
en
oxygen
Slicing Strings (continued)
What is element[-1]
? What is
element[-2]
?
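The solution code implied by the output below:

```python
element = 'oxygen'
print(element[-1])  # negative indices count from the end
print(element[-2])
```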
OUTPUT
n
e
Slicing Strings (continued)
Given those answers, explain what element[1:-1]
does.
Creates a substring from index 1 up to (not including) the final index, effectively removing the first and last letters from ‘oxygen’
Slicing Strings (continued)
How can we rewrite the slice for getting the last three characters of
element
, so that it works even if we assign a different
string to element
? Test your solution with the following
strings: carpentry
, clone
,
hi
.
PYTHON
element = 'oxygen'
print('last three characters:', element[-3:])
element = 'carpentry'
print('last three characters:', element[-3:])
element = 'clone'
print('last three characters:', element[-3:])
element = 'hi'
print('last three characters:', element[-3:])
OUTPUT
last three characters: gen
last three characters: try
last three characters: one
last three characters: hi
Thin Slices
The expression element[3:3]
produces an empty string, i.e., a string that
contains no characters. If data_europe_country
holds our
array of europe data, what does
data_europe_country.iloc[5:5, 4:4]
produce? What about
data_europe_country.iloc[3:3, :]
?
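The solution code implied by the output below (assuming the Europe dataframe loaded earlier):

```python
print(data_europe_country.iloc[5:5, 4:4])
print(data_europe_country.iloc[3:3, :])
```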
OUTPUT
Empty DataFrame
Columns: []
Index: []
Empty DataFrame
Columns: [1952, 1957, 1962, 1967, 1972, 1977, 1982, 1987, 1992, 1997, 2002, 2007]
Index: []
Key Points
- Import a library into a program using import libraryname.
- Use the pandas library to work with tabular data in Python.
- Use the read_csv function to load data into a dataframe variable.
- Use index_col to specify that a column’s values should be used as row headings.
- Use info to find out basic information about a dataframe.
- Use slices and loc to extract entries from a dataframe.
- The expression dataframe.shape gives the shape of the underlying array.
- Use label_a:label_c to specify a slice that includes the rows or columns from label_a to, and including, label_c.
- Array indices start at 0, not 1.
- Use low:high to specify a slice that includes the indices from low to high-1.
- Use # some kind of explanation to add comments to programs.
Content from Visualizing Tabular Data
Last updated on 2024-02-23
Estimated time: 60 minutes
Overview
Questions
- How can I visualize tabular data in Python?
- How can I group several plots together?
Objectives
- Plot simple graphs from data.
- Plot multiple graphs in a single figure.
Visualizing data
The mathematician Richard Hamming once said, “The purpose of
computing is insight, not numbers,” and the best way to develop insight
is often to visualize data. Visualization deserves an entire lecture of
its own, but we can explore a few features of Python’s
matplotlib
library here. While there is no official
plotting library, matplotlib
is the de facto
standard.
Episode Prerequisites
Countries are grouped into files by continent. Each country has its Gross Domestic Product (GDP) per capita (population) recorded in 5 year intervals from 1952 to 2007.
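The prerequisite loading step is not shown on the extracted page; based on the variable name data_eu used later in this episode, it would look like (the filename follows the Gapminder naming pattern):

```python
import pandas as pd

# filename assumed from the lesson's Gapminder data layout
data_eu = pd.read_csv('data/gapminder_gdp_europe.csv', index_col='country')
```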
Dataframes have a .plot()
method which we can use to
produce a line-plot of the data contained within the frame.
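The call itself (assuming data_eu loaded as in the prerequisites):

```python
data_eu.plot()
```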
This has placed all of our data into a single plot that Python has then displayed to us - clearly, there is far too much here for us to take in! You might notice that Pandas has assumed we want to use the row labels as the values along the x-axis of our plot, and the column headers as the line labels. In our case, however, we want to display the GDP per capita over time, using one line for each country. We can fix these problems by combining two of the methods we saw in the previous episode:
- We can use an (index) slice to take only the first 5 countries, for example.
- We can transpose our dataframe to reverse the roles of our rows and columns, so plot uses the rows (years) as the x-axis and the columns (countries) as the line labels.
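Combining those two fixes might look like this (a sketch; the slice of five countries is illustrative):

```python
# take the first 5 countries, transpose so years run along the x-axis,
# then draw one line per country
data_eu.iloc[0:5].T.plot()
```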
Computing statistics across dataframe axes
Let’s begin our analysis of the data by plotting the average GDP across Europe as a function of time. Pandas dataframes have a built-in function, mean(), that we can use to help us here:
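The call producing the output below would be (assuming data_eu loaded as in the prerequisites):

```python
print(data_eu.mean())
```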
OUTPUT
1952 5661.057435
1957 6963.012816
1962 8365.486814
1967 10143.823757
1972 12479.575246
1977 14283.979110
1982 15617.896551
1987 17214.310727
1992 17061.568084
1997 19076.781802
2002 21711.732422
2007 25054.481636
dtype: float64
You’ll notice that Pandas has assumed (again) that we want to get the mean for each year, or to “take the mean GDP down the columns”. However it may also be useful to know the average GDP for each country - in which case we want to take the average “along the rows” instead.
We could use the transpose method to reverse the roles of our rows
and columns like we did before, and then take the average.
Alternatively, mean()
(and many other dataframe functions)
take an optional parameter called axis
which lets us
specify which axis of the dataframe (rows or columns) to take the
average along.
Using the axis
keyword, we can retrieve the average GDP
for each country by taking the average value across the columns
(by setting the argument axis='columns'
to the
mean
method):
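The stripped call is presumably data_eu.mean(axis='columns'); a sketch with a made-up stand-in frame:

```python
import pandas as pd

# Made-up stand-in for data_eu (rows: countries, columns: years)
data_eu = pd.DataFrame({'1952': [1601.0, 6137.0],
                        '1957': [1942.0, 8843.0]},
                       index=['Albania', 'Austria'])

# Average across the columns: one value per country
print(data_eu.mean(axis='columns'))
```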
OUTPUT
country
Albania 3255.366633
Austria 20411.916279
Belgium 19900.758072
... ...
Switzerland 27074.334405
Turkey 4469.453380
United Kingdom 19380.472986
dtype: float64
Plotting statistics
The plot
method called directly from our dataframe is
implicitly using matplotlib
’s pyplot.plot
function. Whilst calling plot
directly from a dataset can
be helpful to get a quick visual glimpse of the data, most of the time
we will want to manipulate our data in some way and plot some
significant statistics or derived values, rather than the raw data
itself. We will use the matplotlib
library to manage and
create plots ourselves from here on. As with any library, we must first
tell Python to import it:
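The import statement was stripped from this page; it is presumably the conventional alias, which matches the plt name used throughout the rest of the episode:

```python
# Import the pyplot interface under its conventional alias
import matplotlib.pyplot as plt
```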
We can now create a plot of the average GDP of European countries in the following way:
PYTHON
fig = plt.figure()
mean_gdp_each_year = data_eu.mean(axis='rows')
plt.plot(mean_gdp_each_year)
plt.show()
Let’s break down what each line is doing.
First, we use the figure
function from the
matplotlib.pyplot
library to create a new, blank figure
canvas. The variable fig
can be used to access this figure
canvas.
Then, we create a new variable mean_gdp_each_year
, with
the average value of our data_eu
dataframe across the rows
(down the columns).
Next, using the plot
function from the
matplotlib.pyplot
library, we request to visualise the data
stored in mean_gdp_each_year
into the figure canvas. If we
had multiple figures open, we could specify which one to plot this data
on. But since we only have one (fig
), plt.plot
knows to plot the data onto this one. Finally, the plt.show()
function displays the final result on the screen.
Grouping Plots
So far, matplotlib
’s plot hasn’t done much more than
dataframe’s plot
function did - but that changes now. It is
often the case where we will want to display multiple statistics
side-by-side, or the same statistic from multiple datasets
simultaneously for comparison purposes. This can be achieved by adding
subplots to a figure, using the add_subplot
method. Let's demonstrate how to do this by plotting the maximum and
minimum GDP of countries in Europe for each year alongside the average
GDP for that year.
To achieve this we will need to:
1. Compute the min, max, and average GDP each year for European countries;
2. Create a new figure with the right canvas proportions;
3. Generate the different subplots ("axes") in which to plot the data; and
4. Display the data on the screen.
PYTHON
eu_min_data = data_eu.min(axis='rows')
eu_max_data = data_eu.max(axis='rows')
eu_avg_data = data_eu.mean(axis='rows')
fig = plt.figure(figsize=(10., 3.))
axes_1 = fig.add_subplot(1, 3, 1)
axes_1.plot(eu_min_data)
axes_2 = fig.add_subplot(1, 3, 2)
axes_2.plot(eu_max_data)
axes_3 = fig.add_subplot(1, 3, 3)
axes_3.plot(eu_avg_data)
plt.show()
Note how we’ve set the right axis arguments when computing the
different statistical properties (axis='rows'
). The
parameter figsize
tells Python how big to make this space
in relative units. In this case the width is a bit larger than three
times the height. Each subplot is placed into the figure using its
add_subplot
method. The
add_subplot
method takes 3 parameters. The first denotes
how many total rows of subplots there are, the second parameter refers
to the total number of subplot columns, and the final parameter denotes
which subplot your variable is referencing (left-to-right,
top-to-bottom). Each subplot is stored in a different variable
(axes_1
, axes_2
, axes_3
). Once a
subplot is created, the axes can be used to place the desired plot for
each.
min
and max
methods
The min
and max
functions can be used on a
dataframe in the same way as the mean
function, and take
the same axis
parameter. For us, this retrieves the minimum
GDP of countries in Europe for each year:
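The stripped call is presumably data_eu.min(axis='rows'), which produces the output below on the real dataset. A sketch with a made-up stand-in frame:

```python
import pandas as pd

# Made-up stand-in for data_eu (rows: countries, columns: years)
data_eu = pd.DataFrame({'1952': [1601.0, 6137.0],
                        '1957': [1942.0, 8843.0]},
                       index=['Albania', 'Austria'])

# Minimum GDP for each year, taken down the rows
print(data_eu.min(axis='rows'))
```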
OUTPUT
1952 973.533195
1957 1353.989176
1962 1709.683679
1967 2172.352423
1972 2860.169750
1977 3528.481305
1982 3630.880722
1987 3738.932735
1992 2497.437901
1997 3193.054604
2002 4604.211737
2007 5937.029526
dtype: float64
Adding labels
Just because we have plotted some statistics doesn't mean our plot is complete!
- There are no axis labels telling us what each subplot is showing us.
- There's no title for the plot.
- There's a lot of whitespace (empty space) surrounding our plot, and between our subplots.
We can fix these using some more matplotlib functions.
- The set_ylabel method lets us add a label for the y-axis of any plot or subplot, using dot notation.
- The set_title method lets us add a title to a subplot.
- The suptitle method lets us add a title to the figure window (a "super"-title).
- The tight_layout method tells matplotlib to remove as much whitespace as possible from our figure.
Putting it all together:
PYTHON
eu_min_data = data_eu.min(axis='rows')
eu_max_data = data_eu.max(axis='rows')
eu_avg_data = data_eu.mean(axis='rows')
fig = plt.figure(figsize=(10., 3.))
axes_1 = fig.add_subplot(1, 3, 1)
axes_1.plot(eu_min_data)
axes_1.set_ylabel('GDP/capita')
axes_1.set_title('Min')
axes_2 = fig.add_subplot(1, 3, 2)
axes_2.plot(eu_max_data)
axes_2.set_ylabel('GDP/capita')
axes_2.set_title('Max')
axes_3 = fig.add_subplot(1, 3, 3)
axes_3.plot(eu_avg_data)
axes_3.set_ylabel('GDP/capita')
axes_3.set_title('Average')
fig.suptitle('GDP/capita statistics for European countries')
fig.tight_layout()
plt.show()
Setting limits for the axes
You might have noticed that our subplots leave a little bit of space between our line and the edges of the subplot itself, which is a result of the range of the y-axis being slightly bigger than the maximum and minimum range of the data we are plotting.
Can you figure out a way to manually set the range of the y-axis, to remove this white space?
Hint:
- Try using the set_ylim(min_value, max_value) method on the subplots.
- Try using the min() and max() methods on the eu_min_data and eu_max_data variables.
To fix this for the first subplot, we can set the y-axis limits to the overall minimum and maximum values of the dataframe.
PYTHON
axes_1 = fig.add_subplot(1, 3, 1)
axes_1.plot(eu_min_data)
axes_1.set_ylabel('GDP/capita')
axes_1.set_title('Min')
# Sets the y-limits to the min/max overall values to ease comparison across the plots
y_axes_min_value = eu_min_data.min()
y_axes_max_value = eu_max_data.max()
axes_1.set_ylim(y_axes_min_value, y_axes_max_value)
Or, without creating intermediate variables, you could use
axes_1.set_ylim(eu_min_data.min(), eu_max_data.max())
instead of the last three lines!
Drawstyles
The plot method doesn't just draw straight, blue lines - it can be customised with some optional parameters.
Modify your calls to plot with different parameters to create different line styles in each of the three subplots. Some useful parameters to add to plot are:
- linestyle = ':'. Can also be tried with '--', '-.', and a few other options.
- color = 'red'. Several other colours are also available!
- marker = 'x'. There are lots of different plotting markers to try out.
Make Your Own Plot
Create a plot showing the standard deviation of the GDP/capita for each year.
Hint: - Try using the std
method on
data_eu
.
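A possible solution sketch, using a made-up stand-in frame in place of the real data_eu:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Made-up stand-in for data_eu (rows: countries, columns: years)
data_eu = pd.DataFrame({'1952': [1601.0, 6137.0],
                        '1957': [1942.0, 8843.0]},
                       index=['Albania', 'Austria'])

fig = plt.figure()
# std(axis='rows') gives the standard deviation across countries, per year
plt.plot(data_eu.std(axis='rows'))
plt.show()
```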
Moving Plots Around
Modify the program to display the three plots on top of one another instead of side by side.
PYTHON
eu_min_data = data_eu.min(axis='rows')
eu_max_data = data_eu.max(axis='rows')
eu_avg_data = data_eu.mean(axis='rows')
fig = plt.figure(figsize=(10., 3.))
axes_1 = fig.add_subplot(3, 1, 1)
axes_1.plot(eu_min_data)
axes_1.set_ylabel('GDP/capita')
axes_1.set_title('Min')
axes_2 = fig.add_subplot(3, 1, 2)
axes_2.plot(eu_max_data)
axes_2.set_ylabel('GDP/capita')
axes_2.set_title('Max')
axes_3 = fig.add_subplot(3, 1, 3)
axes_3.plot(eu_avg_data)
axes_3.set_ylabel('GDP/capita')
axes_3.set_title('Average')
fig.suptitle('GDP/capita statistics for European countries')
fig.tight_layout()
plt.show()
Moving Plots Around (continued)
What would you change to make each of these plots a similar width to when they were side by side?
Change In GDP
The GDP data is longitudinal in the sense that each row represents a series of observations relating to one country. This means that the change in GDP over time is a meaningful concept. Let’s find out how to calculate changes in the data contained in an array with pandas.
The DataFrame.diff() function takes an array and returns
the differences between successive values, depending on the axis
requested.
Let's use it to examine the year-on-year changes for Portugal.
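The code selecting Portugal's row was stripped from this page; presumably it uses label-based selection, something like the sketch below (with a made-up stand-in frame in place of data_eu):

```python
import pandas as pd

# Made-up stand-in for data_eu, including a Portugal row
data_eu = pd.DataFrame({'1952': [1601.0, 3068.0],
                        '1957': [1942.0, 3774.0]},
                       index=['Albania', 'Portugal'])

# Select Portugal's row by label; the result is a series indexed by year
portugal = data_eu.loc['Portugal']
print(portugal)
```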
OUTPUT
1952 3068.319867
1957 3774.571743
1962 4727.954889
... ...
1997 17641.031560
2002 19970.907870
2007 20509.647770
Name: Portugal, dtype: float64
Calling portugal.diff()
would do the following
calculations
PYTHON
[ 3068.31 - NaN, 3774.57 - 3068.31, 4727.95 - 3774.57, ..., 19970.90 - 17641.03, 20509.64 - 19970.90 ]
and return the 12 difference values in a new series.
OUTPUT
1952 NaN
1957 706.251876
1962 953.383146
... ...
1997 1433.764930
2002 2329.876310
2007 538.739900
Name: Portugal, dtype: float64
Note that the first value is NaN because there is no previous value to subtract from the first element.
When calling DataFrame.diff
with a 2-dimensional
dataframe, an axis
argument may be passed to the function
to specify which axis to process. When applying
DataFrame.diff to our 2D GDP dataframe, which axis would we
specify to obtain the differences between successive years for the same country?
Change In GDP (continued)
How would you find the largest change in GDP for each country? Does it matter if the change in GDP is an increase or a decrease?
By using the DataFrame.max()
function after you apply
the DataFrame.diff()
function, you will get the largest
difference between successive years.
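A sketch of the combined call, again with a made-up stand-in frame in place of the real data_eu:

```python
import pandas as pd

# Made-up stand-in for data_eu (rows: countries, columns: years)
data_eu = pd.DataFrame({'1952': [1601.0, 6137.0],
                        '1957': [1942.0, 8843.0],
                        '1962': [2313.0, 10750.0]},
                       index=['Albania', 'Austria'])

# Difference along each row (year to year), then the largest change per country
largest_change = data_eu.diff(axis='columns').max(axis='columns')
print(largest_change)
```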
OUTPUT
country
Albania 1411.157133
Austria 3827.023200
Belgium 3523.102370
... ...
Switzerland 4228.968720
Turkey 1950.190666
United Kingdom 3724.262090
dtype: float64
If GDP values decrease along an axis, then the difference
from one element to the next will be negative. If you are interested in
the magnitude of the change and not the direction, the
DataFrame.abs()
function will provide that.
Notice whether anything changes when you take the largest absolute difference between years instead.
OUTPUT
country
Albania 1411.157133
Austria 3827.023200
Belgium 3523.102370
... ...
Switzerland 4228.968720
Turkey 1950.190666
United Kingdom 3724.262090
dtype: float64
Key Points
- Use the pyplot module from the matplotlib library to create visualizations of data.
- Dataframes have methods like min, max, and mean to compute statistics along either the rows or the columns.
- Use the axis argument in statistic functions to calculate the values across the specified axis.
- We can use add_subplot to create multiple plots in a single figure.
- We can customise the labels, axis ranges, line styles, and more of our plots using matplotlib.
Content from Storing Multiple Values in Lists
Last updated on 2024-02-23 | Edit this page
Estimated time: 45 minutes
Overview
Questions
- How can I store many values together?
Objectives
- Explain what a list is.
- Create and index lists of simple values.
- Change the values of individual elements
- Append values to an existing list
- Reorder and slice list elements
- Create and manipulate nested lists
In the previous episode, we analysed a single .csv
data
file containing GDP data for countries in Europe. However, we also have
similar data for the other continents, and we would like to repeat our
analysis on each of them. This means that we still have four more data
files to process!
The natural first step is to collect the names of all the files that we have to process. In Python, a list is a way to store multiple values together. In this episode, we will learn how to store multiple values in a list as well as how to work with lists.
Python lists
Unlike dataframes, lists are built into the language so we do not have to load a library to use them. We create a list by putting values inside square brackets and separating the values with commas:
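The code producing the output below was stripped from this page; it was presumably:

```python
# Create a list of odd numbers and print it
odds = [1, 3, 5, 7]
print('odds are:', odds)
```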
OUTPUT
odds are: [1, 3, 5, 7]
We can access elements of a list using indices – numbered positions of elements in the list. These positions are numbered starting at 0, so the first element has an index of 0.
PYTHON
print('first element:', odds[0])
print('last element:', odds[3])
print('"-1" element:', odds[-1])
OUTPUT
first element: 1
last element: 7
"-1" element: 7
Yes, we can use negative numbers as indices in Python. When we do so,
the index -1
gives us the last element in the list,
-2
the second to last, and so on. Because of this,
odds[3]
and odds[-1]
point to the same element
here.
There is one important difference between lists and strings: we can change the values in a list, but we cannot change individual characters in a string. For example:
PYTHON
names = ['Curie', 'Darwing', 'Turing'] # typo in Darwin's name
print('names is originally:', names)
names[1] = 'Darwin' # correct the name
print('final value of names:', names)
OUTPUT
names is originally: ['Curie', 'Darwing', 'Turing']
final value of names: ['Curie', 'Darwin', 'Turing']
works, but:
ERROR
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-220df48aeb2e> in <module>()
1 name = 'Darwin'
----> 2 name[0] = 'd'
TypeError: 'str' object does not support item assignment
does not.
Ch-Ch-Ch-Ch-Changes
Data which can be modified in place is called mutable, while data which cannot be modified is called immutable. Strings and numbers are immutable. This does not mean that variables with string or number values are constants, but when we want to change the value of a string or number variable, we can only replace the old value with a completely new value.
Lists on the other hand, are mutable: we can modify them after they have been created. We can change individual elements, append new elements, or reorder the whole list. For some operations, like sorting, we can choose whether to use a function that modifies the data in-place or a function that returns a modified copy and leaves the original unchanged.
Be careful when modifying data in-place. If two variables refer to the same list, and you modify the list value, it will change for both variables!
PYTHON
mild_salsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
hot_salsa = mild_salsa # <-- mild_salsa and hot_salsa point to the *same* list data in memory
hot_salsa[0] = 'hot peppers'
print('Ingredients in mild salsa:', mild_salsa)
print('Ingredients in hot salsa:', hot_salsa)
OUTPUT
Ingredients in mild salsa: ['hot peppers', 'onions', 'cilantro', 'tomatoes']
Ingredients in hot salsa: ['hot peppers', 'onions', 'cilantro', 'tomatoes']
If you want variables with mutable values to be independent, you must make a copy of the value when you assign it.
PYTHON
mild_salsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
hot_salsa = list(mild_salsa) # <-- makes a *copy* of the list
hot_salsa[0] = 'hot peppers'
print('Ingredients in mild salsa:', mild_salsa)
print('Ingredients in hot salsa:', hot_salsa)
OUTPUT
Ingredients in mild salsa: ['peppers', 'onions', 'cilantro', 'tomatoes']
Ingredients in hot salsa: ['hot peppers', 'onions', 'cilantro', 'tomatoes']
Because of pitfalls like this, code which modifies data in place can be more difficult to understand. However, it is often far more efficient to modify a large data structure in place than to create a modified copy for every small change. You should consider both of these aspects when writing your code.
Nested Lists
Since a list can contain any Python variables, it can even contain other lists.
For example, you could represent the products on the shelves of a
small grocery shop as a nested list called veg
:
To store the contents of the shelf in a nested list, you write it this way:
PYTHON
veg = [['lettuce', 'lettuce', 'peppers', 'zucchini'],
['lettuce', 'lettuce', 'peppers', 'zucchini'],
['lettuce', 'cilantro', 'peppers', 'zucchini']]
Here are some visual examples of how indexing a list of lists
veg
works. First, you can reference each row on the shelf
as a separate list. For example, veg[2]
represents the
bottom row, which is a list of the baskets in that row.
Index operations on veg work like this:
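The index expressions themselves were stripped from this page; these reproduce the two outputs shown below (veg is redefined here so the sketch is self-contained):

```python
veg = [['lettuce', 'lettuce', 'peppers', 'zucchini'],
       ['lettuce', 'lettuce', 'peppers', 'zucchini'],
       ['lettuce', 'cilantro', 'peppers', 'zucchini']]

print(veg[2])   # the bottom row of the shelf
print(veg[0])   # the top row of the shelf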
OUTPUT
['lettuce', 'cilantro', 'peppers', 'zucchini']
OUTPUT
['lettuce', 'lettuce', 'peppers', 'zucchini']
To reference a specific basket on a specific shelf, you use two indexes. The first index represents the row (from top to bottom) and the second index represents the specific basket (from left to right).
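The exact index pairs used in the stripped example are not recoverable, but these two reproduce the outputs shown below:

```python
veg = [['lettuce', 'lettuce', 'peppers', 'zucchini'],
       ['lettuce', 'lettuce', 'peppers', 'zucchini'],
       ['lettuce', 'cilantro', 'peppers', 'zucchini']]

print(veg[0][0])   # first basket of the first row
print(veg[1][2])   # third basket of the second row
```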
OUTPUT
'lettuce'
OUTPUT
'peppers'
There are many ways to change the contents of lists besides assigning new values to individual elements:
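For example, the (stripped) code producing the output below presumably appends a new value:

```python
odds = [1, 3, 5, 7]
odds.append(11)   # append adds a single element to the end of the list
print('odds after adding a value:', odds)
```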
OUTPUT
odds after adding a value: [1, 3, 5, 7, 11]
PYTHON
removed_element = odds.pop(0)
print('odds after removing the first element:', odds)
print('removed_element:', removed_element)
OUTPUT
odds after removing the first element: [3, 5, 7, 11]
removed_element: 1
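The reversing step was also stripped; it presumably uses the reverse method (odds here starts from the state left by the pop(0) call above):

```python
odds = [3, 5, 7, 11]   # the list as left by the pop(0) call above
odds.reverse()         # reverses in place and returns None
print('odds after reversing:', odds)
```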
OUTPUT
odds after reversing: [11, 7, 5, 3]
While modifying in place, it is useful to remember that Python treats
lists in a slightly counter-intuitive way. As we saw earlier with the
mild_salsa list, if we make a list, (attempt to) copy it, and then
modify the copy, we can cause all sorts of trouble. This also applies to
modifying the list using the above functions:
PYTHON
odds = [3, 5, 7]
primes = odds
primes.append(2)
print('primes:', primes)
print('odds:', odds)
OUTPUT
primes: [3, 5, 7, 2]
odds: [3, 5, 7, 2]
This is because Python stores a list in memory, and then can use
multiple names to refer to the same list. If all we want to do is copy a
(simple) list, we can again use the list
function, so we do
not modify a list we did not mean to:
PYTHON
odds = [3, 5, 7]
primes = list(odds)
primes.append(2)
print('primes:', primes)
print('odds:', odds)
OUTPUT
primes: [3, 5, 7, 2]
odds: [3, 5, 7]
Subsets of lists and strings can be accessed by specifying ranges of
values in brackets, similar to how we accessed ranges of entries in a
dataframe via the .iloc
function. This is commonly referred
to as “slicing” the list/string.
PYTHON
binomial_name = 'Drosophila melanogaster'
group = binomial_name[0:10]
print('group:', group)
species = binomial_name[11:23]
print('species:', species)
chromosomes = ['X', 'Y', '2', '3', '4']
autosomes = chromosomes[2:5]
print('autosomes:', autosomes)
last = chromosomes[-1]
print('last:', last)
OUTPUT
group: Drosophila
species: melanogaster
autosomes: ['2', '3', '4']
last: 4
Slicing From the End
Use slicing to access only the last four characters of a string or entries of a list.
PYTHON
string_for_slicing = 'Observation date: 02-Feb-2013'
list_for_slicing = [['fluorine', 'F'],
['chlorine', 'Cl'],
['bromine', 'Br'],
['iodine', 'I'],
['astatine', 'At']]
OUTPUT
'2013'
[['chlorine', 'Cl'], ['bromine', 'Br'], ['iodine', 'I'], ['astatine', 'At']]
Would your solution work regardless of whether you knew beforehand the length of the string or list (e.g. if you wanted to apply the solution to a set of lists of different lengths)? If not, try to change your approach to make it more robust.
Hint: Remember that indices can be negative as well as positive
Non-Continuous Slices
So far we’ve seen how to use slicing to take single blocks of successive entries from a sequence. But what if we want to take a subset of entries that aren’t next to each other in the sequence?
You can achieve this by providing a third argument to the range within the brackets, called the step size. The example below shows how you can take every third entry in a list:
PYTHON
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
subset = primes[0:12:3]
print('subset', subset)
OUTPUT
subset [2, 7, 17, 29]
Notice that the slice taken begins with the first entry in the range, followed by entries taken at equally-spaced intervals (the steps) thereafter. If you wanted to begin the subset with the third entry, you would need to specify that as the starting point of the sliced range:
PYTHON
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
subset = primes[2:12:3]
print('subset', subset)
OUTPUT
subset [5, 13, 23, 37]
Use the step size argument to create a new string that contains only every other character in the string “In an octopus’s garden in the shade”. Start with creating a variable to hold the string:
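The (stripped) starting code is recoverable from the expected output further below; it is presumably:

```python
# The string to slice
beatles = "In an octopus's garden in the shade"
```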
What slice of beatles
will produce the following output
(i.e., the first character, third character, and every other character
through the end of the string)?
OUTPUT
I notpssgre ntesae
If you want to take a slice from the beginning of a sequence, you can omit the first index in the range:
PYTHON
date = 'Monday 4 January 2016'
day = date[0:6]
print('Using 0 to begin range:', day)
day = date[:6]
print('Omitting beginning index:', day)
OUTPUT
Using 0 to begin range: Monday
Omitting beginning index: Monday
And similarly, you can omit the ending index in the range to take a slice to the very end of the sequence:
PYTHON
months = ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec']
sond = months[8:12]
print('With known last position:', sond)
sond = months[8:len(months)]
print('Using len() to get last entry:', sond)
sond = months[8:]
print('Omitting ending index:', sond)
OUTPUT
With known last position: ['sep', 'oct', 'nov', 'dec']
Using len() to get last entry: ['sep', 'oct', 'nov', 'dec']
Omitting ending index: ['sep', 'oct', 'nov', 'dec']
Overloading
+
usually means addition, but when used on strings or
lists, it means “concatenate”. Given that, what do you think the
multiplication operator *
does on lists? In particular,
what will be the output of the following code?
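The code in question was stripped from this page; based on the four answer options below, it was presumably something like:

```python
counts = [2, 4, 6, 8, 10]
repeats = counts * 2
print(repeats)
```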
[2, 4, 6, 8, 10, 2, 4, 6, 8, 10]
[4, 8, 12, 16, 20]
[[2, 4, 6, 8, 10],[2, 4, 6, 8, 10]]
[2, 4, 6, 8, 10, 4, 8, 12, 16, 20]
The technical term for this is operator overloading: a
single operator, like +
or *
, can do different
things depending on what it’s applied to.
Key Points
- [value1, value2, value3, ...] creates a list.
- Lists can contain any Python object, including lists (i.e., lists of lists).
- Lists are indexed and sliced with square brackets (e.g., list[0] and list[2:9]), in the same way as strings and arrays.
- Lists are mutable (i.e., their values can be changed in place).
- Strings are immutable (i.e., the characters in them cannot be changed).
Content from Repeating Actions with Loops
Last updated on 2024-02-23 | Edit this page
Estimated time: 30 minutes
Overview
Questions
- How can I do the same operations on many different values?
Objectives
- Explain what a for loop does.
- Correctly write for loops to repeat simple calculations.
- Trace changes to a loop variable as the loop runs.
- Trace changes to other variables as they are updated by a for loop.
In the episode about visualizing data, we wrote Python code that
plots values of interest from our first dataset
(gapminder_gdp_europe.csv
).
We still have four more datasets to perform our analysis over, and we’ll want to create plots for all of our data sets. Preferably, we’d do this with a single statement, and to do that, we’ll have to teach the computer how to repeat things.
An example task that we might want to repeat is accessing numbers in a list, which we will do by printing each number on a line of its own.
In Python, a list is basically an ordered collection of elements, and
every element has a unique number associated with it — its index. This
means that we can access elements in a list using their indices. For
example, we can get the first number in the list odds
, by
using odds[0]
. One way to print each number is to use four
print
statements:
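The four print statements were stripped from this page; they are presumably:

```python
odds = [1, 3, 5, 7]
print(odds[0])
print(odds[1])
print(odds[2])
print(odds[3])
```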
OUTPUT
1
3
5
7
This is a bad approach for three reasons:
Not scalable. Imagine you need to print a list that has hundreds of elements. It might be easier to type them in manually.
Difficult to maintain. If we want to decorate each printed element with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for small lists, it would definitely be a problem for longer ones.
Fragile. If we use it with a list that has more elements than what we initially envisioned, it will only display part of the list’s elements. A shorter list, on the other hand, will cause an error because it will be trying to display elements of the list that do not exist.
OUTPUT
1
3
5
ERROR
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-3-7974b6cdaf14> in <module>()
3 print(odds[1])
4 print(odds[2])
----> 5 print(odds[3])
IndexError: list index out of range
Here’s a better approach: a for loop
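The loop itself was stripped from this page; given the loop variable num used later in the episode, it is presumably:

```python
odds = [1, 3, 5, 7]
for num in odds:
    print(num)
```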
OUTPUT
1
3
5
7
This is shorter — certainly shorter than something that prints every number in a hundred-number list — and more robust as well:
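The same loop, unchanged, handles a longer list (the stripped example presumably used the six odd numbers shown in the output below):

```python
odds = [1, 3, 5, 7, 9, 11]
for num in odds:
    print(num)
```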
OUTPUT
1
3
5
7
9
11
The improved version uses a for loop to repeat an operation — in this case, printing — once for each thing in a sequence. The general form of a loop is:
Using the odds example above, the loop might look like this:
Each number (num
) in the variable odds
is
looped through and printed one number after another. The other numbers
in the diagram denote which loop cycle the number was printed in (1
being the first loop cycle, and 6 being the final loop cycle).
We can call the loop
variable anything we like, but there must be a colon at the end of
the line starting the loop, and we must indent anything we want to run
inside the loop. Unlike many other languages, there is no command to
signify the end of the loop body (e.g. end for
); everything
indented after the for
statement belongs to the loop.
What’s in a name?
In the example above, the loop variable was given the name
num
as a mnemonic; it is short for ‘number’. We can choose
any name we want for variables. We might just as easily have chosen the
name banana
for the loop variable, as long as we use the
same name when we invoke the variable inside the loop:
OUTPUT
1
3
5
7
9
11
It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing.
Here’s another loop that repeatedly updates a variable:
PYTHON
length = 0
names = ['Curie', 'Darwin', 'Turing']
for value in names:
    length = length + 1
print('There are', length, 'names in the list.')
OUTPUT
There are 3 names in the list.
It’s worth tracing the execution of this little program step by step.
Since there are three names in names
, the statement on line
4 will be executed three times. The first time around,
length
is zero (the value assigned to it on line 1) and
value
is Curie
. The statement adds 1 to the
old value of length
, producing 1, and updates
length
to refer to that new value. The next time around,
value
is Darwin
and length
is 1,
so length
is updated to be 2. After one more update,
length
is 3; since there is nothing left in
names
for Python to process, the loop finishes and the
print
function on line 5 tells us our final answer.
Note that a loop variable is a variable that is being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:
PYTHON
name = 'Rosalind'
for name in ['Curie', 'Darwin', 'Turing']:
    print(name)
print('after the loop, name is', name)
OUTPUT
Curie
Darwin
Turing
after the loop, name is Turing
Note also that finding the length of an object is such a common
operation that Python actually has a built-in function to do it called
len
:
OUTPUT
4
len
is much faster than any function we could write
ourselves, and much easier to read than a two-line loop. It will also
give us the length of many other things that we haven’t met yet, so we
should always use it when we can.
From 1 to N
Python has a built-in function called range
that
generates a sequence of numbers. range
can accept 1, 2, or
3 parameters.
- If one parameter is given, range generates a sequence of that length, starting at zero and incrementing by 1. For example, range(3) produces the numbers 0, 1, 2.
- If two parameters are given, range starts at the first and ends just before the second, incrementing by one. For example, range(2, 5) produces 2, 3, 4.
- If range is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, range(3, 10, 2) produces 3, 5, 7, 9.
Write a loop that uses range
to print the first 3 natural numbers:
The body of the loop is executed 6 times.
Summing a list
Write a loop that calculates the sum of elements in a list by adding
each element and printing the final value, so
[124, 402, 36]
prints 562
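A solution sketch:

```python
numbers = [124, 402, 36]
total = 0
for num in numbers:
    total = total + num   # accumulate the running sum
print(total)
```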
Computing the Value of a Polynomial
The built-in function enumerate
takes a sequence (e.g. a
list) and generates a new sequence of the
same length. Each element of the new sequence is a pair composed of the
index (0, 1, 2,…) and the value from the original sequence:
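The example itself was stripped from this page; given the names a_list, idx, and val used in the text, it was presumably of this shape (the list contents here are illustrative):

```python
a_list = ['a', 'b', 'c']   # an illustrative list
for idx, val in enumerate(a_list):
    print(idx, val)
```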
The code above loops through a_list
, assigning the index
to idx
and the value to val
.
Suppose you have encoded a polynomial as a list of coefficients in the following way: the first element is the constant term, the second element is the coefficient of the linear term, the third is the coefficient of the quadratic term, etc.
OUTPUT
97
Write a loop using enumerate(coeffs)
which computes the
value y
of any polynomial, given x
and
coeffs
.
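A solution sketch. The values of x and coeffs here are assumptions chosen to reproduce the expected output of 97 (2 + 4*5 + 3*25):

```python
x = 5
coeffs = [2, 4, 3]   # represents 2 + 4x + 3x^2
y = 0
for idx, coef in enumerate(coeffs):
    y = y + coef * x ** idx   # idx is the power of x for this coefficient
print(y)
```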
Key Points
- Use for variable in sequence to process the elements of a sequence one at a time.
- The body of a for loop must be indented.
- Use len(thing) to determine the length of something that contains other values.
Content from Analyzing Data from Multiple Files
Last updated on 2024-02-23 | Edit this page
Estimated time: 20 minutes
Overview
Questions
- How can I do the same operations on many different files?
Objectives
- Use a library function to get a list of filenames that match a wildcard pattern.
- Write a
for
loop to process multiple files.
As a final piece to processing our GDP data, we need a way to get a
list of all the files in our data
directory whose names
start with gapminder_
and end with .csv
. The
following library will help us to achieve this:
The glob
library contains a function, also called
glob
, that finds files and directories whose names match a
pattern. We provide those patterns as strings: the character
*
matches zero or more characters, while ?
matches any one character. We can use this to get the names of all the
CSV files in the current directory:
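The call itself was stripped from this page; it is presumably of this shape:

```python
import glob

# List every CSV file in the current directory, in arbitrary order
print(glob.glob('*.csv'))
```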
OUTPUT
['gapminder_gdp_americas.csv', 'gapminder_gdp_africa.csv', 'gapminder_gdp_europe.csv',
'gapminder_gdp_asia.csv', 'gapminder_gdp_oceania.csv']
As these examples show, glob.glob
’s result is a list of
file and directory paths in arbitrary order. This means we can loop over
it to do something with each filename in turn. In our case, the
“something” we want to do is generate a set of plots for each file in
our GDP dataset.
Determining Matches
Which of these files is not matched by the expression
glob.glob('data/*as*.csv')
?
data/gapminder_gdp_africa.csv
data/gapminder_gdp_americas.csv
data/gapminder_gdp_asia.csv
1 is not matched by the glob.
Minimum File Size
Modify this program so that it prints the number of records in the file that has the fewest records.
PYTHON
import glob
import pandas as pd
fewest = ____
for filename in glob.glob('data/*.csv'):
    dataframe = pd.____(filename)
    fewest = min(____, dataframe.shape[0])
print('smallest file has', fewest, 'records')
Note that the DataFrame.shape attribute
returns a tuple with the number of rows and columns of the data
frame.
PYTHON
import glob
import pandas as pd
fewest = float('Inf')
for filename in glob.glob('data/*.csv'):
    dataframe = pd.read_csv(filename)
    fewest = min(fewest, dataframe.shape[0])
print('smallest file has', fewest, 'records')
You might have chosen to initialize the fewest
variable
with a number greater than the numbers you’re dealing with, but that
could lead to trouble if you reuse the code with bigger numbers. Python
lets you use positive infinity, which will work no matter how big your
numbers are. What other special strings does the float
function recognize?
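Besides ordinary numerals, float accepts a few special strings, case-insensitively and optionally signed:

```python
# 'inf'/'Infinity' and 'nan' (not a number) are recognised,
# with an optional leading + or -.
print(float('inf'))
print(float('-Infinity'))
print(float('nan'))
```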
If we want to start by analyzing just the first three files in
alphabetical order, we can use the sorted
built-in function
to generate a new sorted list from the glob.glob
output:
PYTHON
import glob
import pandas as pd
import matplotlib.pyplot as plt
filenames = sorted(glob.glob('data/gapminder_*.csv'))
filenames = filenames[0:3]
for filename in filenames:
    print(filename)
    continent = filename.split('_')[-1][:-4].capitalize()
    data_gdp = pd.read_csv(filename, index_col='country')
    fig = plt.figure(figsize=(18.0, 3.0))
    axes_1 = fig.add_subplot(1, 3, 1)
    axes_2 = fig.add_subplot(1, 3, 2)
    axes_3 = fig.add_subplot(1, 3, 3)
    axes_1.set_title('Min')
    axes_1.set_ylabel('GDP/capita')
    axes_1.plot(data_gdp.min(axis='rows'))
    axes_2.set_title('Max')
    axes_2.plot(data_gdp.max(axis='rows'))
    axes_3.set_title('Average')
    axes_3.plot(data_gdp.mean(axis='rows'))
    fig.suptitle('GDP/capita statistics for countries in ' + continent)
    fig.tight_layout()
    plt.show()
OUTPUT
data/gapminder_gdp_africa.csv
OUTPUT
data/gapminder_gdp_americas.csv
OUTPUT
data/gapminder_gdp_asia.csv
The average plot generated for the Americas dataset looks a bit strange. How is it possible that the average value across the years is flat? Also, we find a similar behaviour for the minimum graph for the Asia dataset, where in this case it’s always 0.
From inspecting the data we can see that some entries in the Asia dataset have a value of 0. This may suggest that there were potential issues with data collection. The Americas dataset, however, doesn't show any clear indication upon visual inspection; nevertheless, it seems very improbable that the average remained constant over the whole period.
Comparing Data
Write a program that reads in the regional data sets and plots the average GDP per capita for each region over time in a single chart.
This solution builds a useful legend by using the string
split
method to extract the region
from
the path ‘data/gapminder_gdp_a_specific_region.csv’.
PYTHON
import glob
import pandas as pd
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1,1)
for filename in glob.glob('data/gapminder_gdp*.csv'):
    dataframe = pd.read_csv(filename)
    # extract <region> from the filename, expected to be in the format 'data/gapminder_gdp_<region>.csv'.
    # we will split the string using the split method and `_` as our separator,
    # retrieve the last string in the list that split returns (`<region>.csv`),
    # and then remove the `.csv` extension from that string.
    region = filename.split('_')[-1][:-4]
    dataframe.mean().plot(ax=ax, label=region)
plt.legend()
plt.show()
After spending some time investigating the different statistical plots, we gain some insight into the various datasets.
The datasets appear to fall into two categories:
- seemingly “normal” datasets that nevertheless display suspicious average values (such as the Americas)
- “bad” datasets that show 0 for the minima across the years (maybe due to missing data?) for different countries each year.
Key Points
- Use glob.glob(pattern) to create a list of files whose names match a pattern.
- Use * in a pattern to match zero or more characters, and ? to match any single character.
Content from Making Choices
Last updated on 2024-02-23 | Edit this page
Estimated time: 30 minutes
Overview
Questions
- How can my programs do different things based on data values?
Objectives
- Write conditional statements including if, elif, and else branches.
- Correctly evaluate expressions containing and and or.
In our last lesson, we discovered something suspicious was going on in our GDP data by drawing some plots. How can we use Python to automatically recognize the different features we saw, and take a different action for each? In this lesson, we’ll learn how to write code that runs only when certain conditions are true.
Conditionals
We can ask Python to take different actions, depending on a
condition, with an if
statement:
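A version of that if statement consistent with the output shown below is:

```python
num = 37
if num > 100:
    print('greater')
else:
    print('not greater')
print('done')
```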
OUTPUT
not greater
done
The second line of this code uses the keyword if
to tell
Python that we want to make a choice. If the test that follows the
if
statement is true, the body of the if
(i.e., the set of lines indented underneath it) is executed, and
“greater” is printed. If the test is false, the body of the
else
is executed instead, and “not greater” is printed.
Only one or the other is ever executed before continuing on with program
execution to print “done”:
Conditional statements don’t have to include an else
. If
there isn’t one, Python simply does nothing if the test is false:
PYTHON
num = 53
print('before conditional...')
if num > 100:
    print(num, 'is greater than 100')
print('...after conditional')
OUTPUT
before conditional...
...after conditional
We can also chain several tests together using elif
,
which is short for “else if”. The following Python code uses
elif
to print the sign of a number.
PYTHON
num = -3
if num > 0:
    print(num, 'is positive')
elif num == 0:
    print(num, 'is zero')
else:
    print(num, 'is negative')
OUTPUT
-3 is negative
Note that to test for equality we use a double equals sign
==
rather than a single equals sign =
which is
used to assign values.
Comparing in Python
Along with the >
and ==
operators we
have already used for comparing values in our conditionals, there are a
few more options to know about:
- > : greater than
- < : less than
- == : equal to
- != : does not equal
- >= : greater than or equal to
- <= : less than or equal to
We can also combine tests using and
and or
.
and
is only true if both parts are true:
PYTHON
if (1 > 0) and (-1 >= 0):
    print('both parts are true')
else:
    print('at least one part is false')
OUTPUT
at least one part is false
while or
is true if at least one part is true:
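A test along these lines produces the output shown below:

```python
if (1 < 0) or (1 >= 0):
    print('at least one test is true')
```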
OUTPUT
at least one test is true
True and False are special words in Python called booleans, which represent truth values. A statement such as 1 < 0 returns the value False, while -1 < 0 returns the value True.
Checking our Data
Now that we’ve seen how conditionals work, we can use them to check
for the suspicious features we saw in our GDP data. We are
about to use functions provided by the pandas
module again.
Therefore, if you’re working in a new Python session, make sure to
import it (import pandas as pd) and load the GDP data into a variable
called data before continuing.
From the first set of plots, we saw that the minimum and average exhibit strange behavior for some of our datasets. Wouldn't it be a good idea to detect such behavior and report it as suspicious? Let's do that! However, instead of checking every entry manually, let's check whether the smallest and the largest of the per-year minima are both zero.
PYTHON
min_data = data.min(axis='rows')
min_min_data = min_data.min()
max_min_data = min_data.max()
if min_min_data == 0 and max_min_data == 0:
    print('Suspicious looking minima!')
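To see this check trigger without the gapminder files, a small made-up dataframe (the values here are purely hypothetical) works:

```python
import pandas as pd

# Toy stand-in for a "bad" dataset: every year's column contains a zero,
# so the per-year minima are all zero.
data = pd.DataFrame({'1952': [0, 5, 9],
                     '1957': [0, 6, 10]})

min_data = data.min(axis='rows')
if min_data.min() == 0 and min_data.max() == 0:
    print('Suspicious looking minima!')
```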
We also saw a different problem with the Americas dataset; the average
across the years was constant (looks like someone had manipulated the
data). We can also check for this with an elif
condition:
PYTHON
elif round(data.mean(axis='rows').min()) == round(data.mean(axis='rows').max()):
    print('Average is flat!')
And if neither of these conditions are true, we can use
else
to give the all-clear:
Let’s test that out:
PYTHON
data = pd.read_csv('data/gapminder_gdp_asia.csv', index_col='country')
min_data = data.min(axis='rows')
min_min_data = min_data.min()
max_min_data = min_data.max()
if min_min_data == 0 and max_min_data == 0:
    print('Suspicious looking minima!')
elif round(data.mean(axis='rows').min()) == round(data.mean(axis='rows').max()):
    print('Average is flat!')
else:
    print('Seems OK!')
OUTPUT
Suspicious looking minima!
PYTHON
data = pd.read_csv('data/gapminder_gdp_americas.csv', index_col='country')
min_data = data.min(axis='rows')
min_min_data = min_data.min()
max_min_data = min_data.max()
if min_min_data == 0 and max_min_data == 0:
    print('Suspicious looking minima!')
elif round(data.mean(axis='rows').min()) == round(data.mean(axis='rows').max()):
    print('Average is flat!')
else:
    print('Seems OK!')
OUTPUT
Average is flat!
In this way, we have asked Python to do something different depending
on the condition of our data. Here we printed messages in all cases, but
we could also imagine not using the else
catch-all so that
messages are only printed when something is wrong, freeing us from
having to manually examine every plot for features we’ve seen
before.
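The challenge being explained here was presumably along these lines:

```python
if 4 > 5:
    print('A')
elif 4 == 5:
    print('B')
elif 4 < 5:
    print('C')
```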
C gets printed because the first two conditions,
4 > 5
and 4 == 5
, are not true, but
4 < 5
is true. In this case only one of these conditions
can be true at a time, but in other scenarios multiple
elif
conditions could be met. In these scenarios only the
action associated with the first true elif
condition will
occur, starting from the top of the conditional section.
This contrasts with the case of multiple if
statements,
where every action can occur as long as their condition is met.
What Is Truth?
True
and False
booleans are not the only
values in Python that are true and false. In fact, any value
can be used in an if
or elif
. After reading
and running the code below, explain what the rule is for which values
are considered true and which are considered false.
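The code in question could look like this:

```python
if '':
    print('empty string is true')
if 'word':
    print('word is true')
if []:
    print('empty list is true')
if [1, 2, 3]:
    print('non-empty list is true')
if 0:
    print('zero is true')
if 1:
    print('one is true')
```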
That’s Not Not What I Meant
Sometimes it is useful to check whether some condition is not true.
The Boolean operator not
can do this explicitly. After
reading and running the code below, write some if
statements that use not
to test the rule that you
formulated in the previous challenge.
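A sketch of such code:

```python
if not '':
    print('empty string is not true')
if not 'word':
    print('word is not true')
if not not True:
    print('not not True is true')
```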
Close Enough
Write some conditions that print True
if the variable
a
is within 10% of the variable b
and
False
otherwise. Compare your implementation with your
partner’s: do you get the same answer for all possible pairs of
numbers?
There is a built-in
function abs
that returns the absolute value of a
number:
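For example:

```python
print(abs(-12))
```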
OUTPUT
12
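One possible solution (note this test is not symmetric: it compares the difference against 10% of b, not of a):

```python
a = 5
b = 5.1
# Within 10% of b means the absolute difference is at most 0.1 * |b|.
if abs(a - b) <= 0.1 * abs(b):
    print('True')
else:
    print('False')
```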
In-Place Operators
Python (and most other languages in the C family) provides in-place operators that work like this:
PYTHON
x = 1 # original value
x += 1 # add one to x, assigning result back to x
x *= 3 # multiply x by 3
print(x)
OUTPUT
6
Write some code that sums the positive and negative numbers in a list separately, using in-place operators. Do you think the result is more or less readable than writing the same without in-place operators?
PYTHON
positive_sum = 0
negative_sum = 0
test_list = [3, 4, 6, 1, -1, -5, 0, 7, -8]
for num in test_list:
    if num > 0:
        positive_sum += num
    elif num == 0:
        pass
    else:
        negative_sum += num
print(positive_sum, negative_sum)
Here pass
means “don’t do anything”. In this particular
case, it’s not actually needed, since if num == 0
neither
sum needs to change, but it illustrates the use of elif
and
pass
.
Counting Vowels
- Write a loop that counts the number of vowels in a character string.
- Test it on a few individual words and full sentences.
- Once you are done, compare your solution to your neighbor’s. Did you make the same decisions about how to handle the letter ‘y’ (which some people think is a vowel, and some do not)?
Trimming Values
Fill in the blanks so that this program creates a new list containing zeroes where the original list’s values were negative and ones where the original list’s values were positive.
PYTHON
original = [-1.5, 0.2, 0.4, 0.0, -1.3, 0.4]
result = ____
for value in original:
    if ____:
        result.append(0)
    else:
        ____
print(result)
OUTPUT
[0, 1, 1, 1, 0, 1]
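A filled-in version consistent with the expected output (0.0 counts as non-negative here, so it maps to 1):

```python
original = [-1.5, 0.2, 0.4, 0.0, -1.3, 0.4]
result = []
for value in original:
    if value < 0.0:
        result.append(0)
    else:
        result.append(1)
print(result)
```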
Initializing
Modify this program so that it finds the largest and smallest values in the list no matter what the range of values originally is.
PYTHON
values = [...some test data...]
smallest, largest = None, None
for v in values:
    if ____:
        smallest, largest = v, v
    ____:
        smallest = min(____, v)
        largest = max(____, v)
print(smallest, largest)
What are the advantages and disadvantages of using this method to find the range of the data?
PYTHON
values = [-2,1,65,78,-54,-24,100]
smallest, largest = None, None
for v in values:
    if smallest is None and largest is None:
        smallest, largest = v, v
    else:
        smallest = min(smallest, v)
        largest = max(largest, v)
print(smallest, largest)
If you wrote == None
instead of is None
,
that works too, but Python programmers always write is None
because of the special way None
works in the language.
It can be argued that an advantage of using this method would be to
make the code more readable. However, a disadvantage is that this code
is not efficient because within each iteration of the for
loop statement, there are two more loops that run over two numbers each
(the min
and max
functions). It would be more
efficient to iterate over each number just once:
PYTHON
values = [-2,1,65,78,-54,-24,100]
smallest, largest = None, None
for v in values:
    if smallest is None or v < smallest:
        smallest = v
    if largest is None or v > largest:
        largest = v
print(smallest, largest)
Now we have one loop, but four comparison tests. There are two ways we could improve it further: either use fewer comparisons in each iteration, or use two loops that each contain only one comparison test. The simplest solution is often the best:
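Here that is arguably just calling the built-in min and max functions directly:

```python
values = [-2, 1, 65, 78, -54, -24, 100]
print(min(values), max(values))
```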
Key Points
- Use if condition to start a conditional statement, elif condition to provide additional tests, and else to provide a default.
- The bodies of the branches of conditional statements must be indented.
- Use == to test for equality.
- X and Y is only true if both X and Y are true.
- X or Y is true if either X or Y, or both, are true.
- Zero, the empty string, and the empty list are considered false; all other numbers, strings, and lists are considered true.
- True and False represent truth values.
Content from Creating Functions
Last updated on 2024-02-23 | Edit this page
Estimated time: 30 minutes
Overview
Questions
- How can I define new functions?
- What’s the difference between defining and calling a function?
- What happens when I call a function?
Objectives
- Define a function that takes parameters.
- Return a value from a function.
- Test and debug a function.
- Set default values for function parameters.
- Explain why we should divide programs into small, single-purpose functions.
Prerequisite
In this lesson we are going to be using the data in the
penguin_data.csv
file, which is a subset of the freely
available dataset palmerpenguins
.
This dataset contains the species, culmen length, culmen depth, flipper
length and mass of 343 penguins observed on the Palmer archipelago,
Antarctica.
If you are starting a new notebook, you’ll need to
import
Pandas and load this data into a variable, which we
will call penguins
. We have assigned each penguin a name,
which we will use as the row labels (and pass to the
index_col
parameter).
We will also be making plots, so we’ll need to import
matplotlib.pyplot
like we did in lesson 3. And finally, we
also need the mathematical constant \(\pi\) (pi
), which we can
import
from the math
library that comes with
Python. For this we will be using from
to only import a
single “thing” (pi in this case).
PYTHON
from math import pi
import pandas as pd
import matplotlib.pyplot as plt
penguins = pd.read_csv('data/penguin_data.csv', index_col='name')
print("Pi is:", pi)
print("Our dataset looks like:")
print(penguins.head(5))
OUTPUT
Pi is: 3.141592653589793
Our dataset looks like:
id species culmen depth (mm) culmen length (mm) \
name
lyndale N34A2 Gentoo 16.3 51.5
drexel N32A1 Adelie 16.6 35.9
delaware N56A2 Gentoo 16.0 48.6
phillips N20A2 Gentoo 16.8 49.8
south shore N65A2 Chinstrap 18.8 51.0
flipper length (mm) mass (kg)
name
lyndale 230.0 5.50
drexel 190.0 3.05
delaware 230.0 5.80
phillips 230.0 5.70
south shore 203.0 4.10
Having recently returned from a research trip to Antarctica, a researcher has hypothesised that penguin species with larger bills are able to consume more food than those with smaller bills. The trouble is, the data that’s been collected doesn’t record the information they want directly - we will need to infer this information from the data that has been recorded.
We’re going to have to do a lot of calculations with the data to help justify this researcher’s claims. It would be helpful if we didn’t have to manually type out these calculations every time we want to perform them. This is where functions come in: given some inputs, they define a sequence of steps which produce an output.
Functions are like recipes
Python views functions in the same way as humans might view recipes when cooking. Given some ingredients (the inputs), you follow the recipe (the instructions/steps) to produce a meal (the output). You might decide to switch out the vegetables you're using, or swap chips for something like sweet potato fries, and so you get a different meal even if the steps you take to make the meal are the same.
You only have to look in one place for the recipe to know what you’re doing. Similarly, functions let us write one set of instructions that can be run multiple times in our code. This also helps if we find a bug in our instructions - we only have to change the instructions in one place (the function) rather than all over our notebook!
The researcher informs us that we can treat the bill of a penguin as a cylinder, with the “depth” being the diameter of the cylinder and the “length” the height. The volume of a cylinder is given by \[ \text{cylinder volume} = \text{cylinder height} \times \pi \left(\text{cylinder radius} \right)^2. \] This is not a simple calculation to write out every time we need to do it, and we will most likely want to do this calculation a lot in our analysis! So we can define a function that can perform this calculation for us.
A function definition always starts with the keyword def, followed by the name of the
function (cylinder_volume). After the name, we put in
brackets the parameters (or arguments) that the function takes
(height, radius). The body concludes with a
return keyword that tells Python the value that this
function should provide as its output. In this function we also use
**, the operator that raises a number to a
power.
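A definition consistent with this description is:

```python
from math import pi

def cylinder_volume(height, radius):
    # Volume of a cylinder: height * pi * radius squared.
    volume = height * pi * radius ** 2
    return volume
```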
When we call the function, the values we pass to it are assigned to those variables so that we can use them inside the function. Inside the function, we use a return statement to send a result back to whoever asked for it. Let’s try running our function:
The volume of a cylinder with radius 1 and height 1 should be
1 * pi * 1 ** 2 = pi
This command should call our function, using “1” as the input for
height
, and “1” as the input for radius
, and
return the function value. In fact, calling our own function is no
different from calling any other function:
PYTHON
print('cylinder with no height has volume:', cylinder_volume(0, 1))
print('cylinder with height 1 and radius 1 has volume:', cylinder_volume(1, 1))
OUTPUT
cylinder with no height has volume: 0.0
cylinder with height 1 and radius 1 has volume: 3.141592653589793
We’ve successfully called the function that we defined, and we have access to the value that we returned.
Composing Functions
Now that we have a function to calculate the volume of a cylinder, we can start to estimate the bill sizes of our penguins. From the researcher's explanation, the bill size of a penguin can be worked out as: \[ \text{bill size} = \text{culmen length} \times \pi \left(\frac{\text{culmen depth}}{2} \right)^2, \] since the culmen depth is the cylinder's diameter, which is double the radius.
So how can we write out a function to estimate the bill size from the
culmen length and culmen depth? We could write out the formula above,
but we don’t need to. Instead, we can compose our
cylinder_volume
function with a statement that divides the
diameter by 2 to obtain the radius. We use it then to compute the bill
size of the penguin ‘phillips’.
PYTHON
def penguin_bill_size(culmen_length, culmen_depth):
    culmen_radius = culmen_depth / 2
    culmen_size = cylinder_volume(culmen_length, culmen_radius)
    return culmen_size
phillips_length = penguins.loc['phillips','culmen length (mm)']
phillips_depth = penguins.loc['phillips','culmen depth (mm)']
print('Penguin phillips has bill size', penguin_bill_size(phillips_length, phillips_depth), "mm^3")
OUTPUT
Penguin phillips has bill size 11039.204726337332 mm^3
This is our first taste of how larger programs are built: we define basic operations, then combine them in ever-larger chunks to get the effect we want. Real-life functions will usually be larger than the ones shown here — typically half a dozen to a few dozen lines — but they shouldn’t ever be much longer than that, or the next person who reads it won’t be able to understand what’s going on.
Variable Scope
In composing our penguin_bill_size
function, we created
variables inside of those functions:
culmen_radius
and culmen_size
inside
penguin_bill_size
, and volume
within
cylinder_volume
. We refer to these variables as local variables because they no
longer exist once the function is done executing. If we try to access
their values outside of the function, we will encounter an error:
ERROR
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
/tmp/ipykernel_8392/2249064020.py in <module>
----> 1 print('The culmen_size was:', culmen_size)
NameError: name 'culmen_size' is not defined
If you want to reuse the bill size you’ve calculated, you can store the result of the function call in a variable:
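For example (cylinder_volume is repeated here so the snippet is self-contained):

```python
from math import pi

def cylinder_volume(height, radius):
    volume = height * pi * radius ** 2
    return volume

# Store the returned value so it can be reused later.
bill_size = cylinder_volume(1, 1)
print('bill_size was:', bill_size)
```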
OUTPUT
bill_size was: 3.141592653589793
The variable bill_size
, being defined outside any
function, is said to be global.
Inside a function, one can read the value of such global variables.
For example, our cylinder_volume
function is able to read
the value of pi
, even though we didn’t actually assign
pi
within the function!
PYTHON
def cylinder_volume(height, radius):
    print("This function knows the value of pi is", pi)
    volume = height * pi * radius ** 2
    return volume
volume = cylinder_volume(0, 0)
print("Volume was:", volume)
OUTPUT
This function knows the value of pi is 3.141592653589793
Volume was: 0.0
Operating on dataframe columns
Our penguin_bill_size
function performs well when we
give it two individual numbers for the culmen length and depth. But our
dataframe has 343 penguins in it — we don’t want to call the function
343 times if we can avoid it! Fortunately for us, dataframes are clever
enough to allow us to “operate along columns”. If we want to do the
same calculation with all the values in two dataframe columns,
we can just give our function the dataframe columns containing
all the culmen lengths and culmen depths:
PYTHON
bill_sizes = penguin_bill_size(penguins['culmen length (mm)'], penguins['culmen depth (mm)'])
print(bill_sizes)
This is an example of vectorisation; Python can perform the
same command across a set of data much faster than it would if we called
the penguin_bill_size
343 times on the individual culmen
lengths and depths!
Tidying up
Now that we know how to wrap bits of code up in functions, we can
make our analysis of the size of penguin bills easier to read and reuse.
The researcher was interested in whether different penguin species have
different bill sizes, and a natural way to test this is to produce a box
plot for each of the species we have data on. First, let’s make a
visualise_bill_sizes
function that generates a box plot for
a single species of penguin:
PYTHON
def visualise_bill_sizes(penguin_data, species_name):
    if species_name in penguin_data['species'].values:
        # We have some data on that species of penguin!
        this_species = penguin_data.loc[penguin_data['species'] == species_name]
        # Now let's work out the bill sizes of these penguins
        bill_sizes = penguin_bill_size(this_species['culmen length (mm)'], this_species['culmen depth (mm)'])
        # Now let's make a plot of these bill sizes
        fig = plt.figure(figsize=(10., 3.))
        bill_sizes.plot.box(vert=False)
        plt.title(species_name + " penguins")
        plt.xlabel("Bill sizes (mm^3)")
        plt.show()
    else:
        print("There is no data on penguin species:", species_name)
This function checks first whether the species name requested exists on the dataframe. If the species exists, then it proceeds to generate the plot, otherwise it prints a message informing that the species is not in the dataframe.
Wait! Didn’t we forget to specify what this function should return?
Well, we didn’t. In Python, functions are not required to include a
return
statement and can be used for the sole purpose of
grouping together pieces of code that conceptually do one thing. In such
cases, function names usually describe what they do, e.g.
visualise_bill_sizes
.
Notice that rather than jumbling this code together in one giant
for
loop, we can now read and reuse both ideas separately.
We can produce a box plot for each species of penguin using a
for
loop, and by putting our function inside it!
PYTHON
penguins = pd.read_csv('data/penguin_data.csv', index_col='name')
unique_species = penguins['species'].unique()
for species in unique_species:
    visualise_bill_sizes(penguins, species)
By giving our functions human-readable names, we can more easily read
and understand what is happening in the for
loop. Even
better, if at some later date we want to use either of those pieces of
code again, we can do so in a single line.
Combining Strings
“Adding” two strings produces their concatenation:
'a' + 'b'
is 'ab'
. Write a function called
fence
that takes two parameters called
original
and wrapper
and returns a new string
that has the wrapper character at the beginning and end of the original.
A call to your function should look like this:
OUTPUT
*name*
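One possible implementation of fence (there are several reasonable ones):

```python
def fence(original, wrapper):
    # Put the wrapper character at both ends of the original string.
    return wrapper + original + wrapper

print(fence('name', '*'))
```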
Return versus print
Note that return
and print
are not
interchangeable. print
is a Python function that
prints data to the screen. It enables us, the users, to see
the data. The return
statement, on the other hand, makes data
visible to the program. Let’s have a look at the following function:
Question: What will we see if we execute the following commands?
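The function and the commands in question, reconstructed to be consistent with the explanation that follows:

```python
def add(a, b):
    print(a + b)

A = add(7, 3)
print(A)
```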
Python will first execute the function add
with
a = 7
and b = 3
, and, therefore, print
10
. However, because function add
does not
have a line that starts with return
(no return
“statement”), it will, by default, return nothing, which in the Python
world is called None
. Therefore, None
will be
assigned to A
, and the last line (print(A)
)
will print None
. As a result, we will see:
OUTPUT
10
None
Rescaling an Array
Our researcher has decided that it would be much clearer if all the
bill sizes were rescaled so that the values lie in the range 0.0 to 1.0.
Write a new function rescaled_bill_sizes
that:
- Takes a dataframe of penguins as its input.
- Calculates the bill sizes of all the penguins in the dataframe, then rescales the values to lie in this range.
- Returns the rescaled bill sizes as the output.
Hint: If L
and H
are the lowest and highest
values of the original bill sizes, then the replacement for a bill size
size
should be (size-L) / (H-L)
.
OUTPUT
259.81666666666666
278.15
273.15
0
k
is 0 because the k
inside the function
f2k
doesn’t know about the k
defined outside
the function. When the f2k
function is called, it creates a
local variable
k
. The function does not return any values and does not
alter k
outside of its local copy. Therefore the original
value of k
remains unchanged. Beware that a local
k
is created because f2k
internal statements
affect a new value to it. If k
was only
read
, it would simply retrieve the global k
value.
Variables Inside and Outside Functions (continued)
Do you recognise what this function is doing?
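The function itself is not shown here; an intentionally obscure version might look like this (the one-letter names are made up):

```python
def t(a):
    b = (a - 32) * (5 / 9)
    c = b + 273.15
    return c
```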
It converts temperatures from Fahrenheit to Kelvin. It would probably have been easier to understand if written in this way:
PYTHON
def fahr_to_kelvin(temp_f):
    temp_celsius = (temp_f - 32) * (5/9)
    temp_kelvin = temp_celsius + 273.15
    return temp_kelvin
Naming functions and variables in a readable way helps the next person who comes to read your code. That next person could be your future self!
Producing a scatter plot
Write a function plot_bill_size_vs_flipper
that:
- Takes the penguin data and the name of a species as its arguments.
- Produces a scatter plot for that species, with the flipper length on the x-axis and bill size on the y-axis.
- Shows this plot on the screen.
Hint: plt.scatter(x_data, y_data)
produces a scatter
plot.
PYTHON
def plot_bill_size_vs_flipper(penguin_data, species_name):
    if species_name in penguin_data['species'].values:
        this_species = penguin_data.loc[penguin_data['species'] == species_name]
        bill_sizes = penguin_bill_size(this_species['culmen length (mm)'], this_species['culmen depth (mm)'])
        fig = plt.figure(figsize=(5., 5.))
        plt.scatter(this_species['flipper length (mm)'], bill_sizes)
        plt.title(species_name + " penguins")
        plt.xlabel("Flipper length (mm)")
        plt.ylabel("Bill sizes (mm^3)")
        plt.show()
    else:
        print("There is no data on penguin species:", species_name)
Tidying up
Now that we know how to wrap bits of code up in functions, we can
make our GDP analysis easier to read and easier to reuse. First, let’s
make a visualize
function that generates our plots:
PYTHON
def visualize(filename):
    continent = filename.split('_')[-1][:-4].capitalize()
    data_gdp = pd.read_csv(filename, index_col='country')
    fig = plt.figure(figsize=(18.0, 3.0))
    axes_1 = fig.add_subplot(1, 3, 1)
    axes_2 = fig.add_subplot(1, 3, 2)
    axes_3 = fig.add_subplot(1, 3, 3)
    axes_1.set_title('Min')
    axes_1.set_ylabel('GDP/capita')
    axes_1.plot(data_gdp.min(axis='rows'))
    axes_2.set_title('Max')
    axes_2.plot(data_gdp.max(axis='rows'))
    axes_3.set_title('Average')
    axes_3.plot(data_gdp.mean(axis='rows'))
    fig.suptitle('GDP/capita statistics for countries in ' + continent)
    fig.tight_layout()
    plt.show()
and another function called detect_problems
that checks
for those systematics we noticed:
PYTHON
def detect_problems(filename):
    data_gdp = pd.read_csv(filename, index_col='country')
    min_data = data_gdp.min(axis='rows')
    min_min_data = min_data.min()
    max_min_data = min_data.max()
    if min_min_data == 0 and max_min_data == 0:
        print('Suspicious looking minima!')
    elif round(data_gdp.mean(axis='rows').min()) == round(data_gdp.mean(axis='rows').max()):
        print('Average is flat!')
    else:
        print('Seems OK!')
Notice that rather than jumbling this code together in one giant
for
loop, we can now read and reuse both ideas separately.
We can reproduce the previous analysis with a much simpler
for
loop:
PYTHON
filenames = sorted(glob.glob('data/gapminder_*.csv'))
for filename in filenames[:3]:
    print(filename)
    visualize(filename)
    detect_problems(filename)
By giving our functions human-readable names, we can more easily read
and understand what is happening in the for
loop. Even
better, if at some later date we want to use either of those pieces of
code again, we can do so in a single line.
Testing and Documenting
Once we start putting things in functions so that we can re-use them, we need to start testing that those functions are working correctly. To see how to do this, let's write a function to convert values from USD to GBP:
We could test this on our actual data, but since we don't know what the values ought to be, it would be hard to tell if the result is correct. Instead, let's input a value manually: 1 USD, with a USD-to-GBP rate of 0.8:
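The function and the manual check could look like this:

```python
def usd_to_gbp(data, usd_gbp_rate):
    # Multiply by the exchange rate to convert USD to GBP.
    return data * usd_gbp_rate

print(usd_to_gbp(1, 0.8))
```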
OUTPUT
0.8
That looks right, so let’s try usd_to_gbp
on our real
data:
PYTHON
data_gdp = pd.read_csv('data/gapminder_gdp_oceania.csv', index_col='country')
print('in USD')
print(data_gdp.loc[:, 1952:1962])
print('in GBP')
print(usd_to_gbp(data_gdp.loc[:, 1952:1962], 0.8))
OUTPUT
in USD
1952 1957 1962
country
Australia 10039.59564 10949.64959 12217.22686
New Zealand 10556.57566 12247.39532 13175.67800
in GBP
1952 1957 1962
country
Australia 8031.676512 8759.719672 9773.781488
New Zealand 8445.260528 9797.916256 10540.542400
We have one more task to do first, though: we should write some documentation for our function to remind ourselves later what it’s for and how to use it.
The usual way to put documentation in software is to add comments like this:
PYTHON
# usd_to_gbp(data, usd_gbp_rate):
# return the input data converted to GBP
def usd_to_gbp(data, usd_gbp_rate):
    return (data * usd_gbp_rate)
There’s a better way, though. If the first thing in a function is a string that isn’t assigned to a variable, that string is attached to the function as its documentation:
PYTHON
def usd_to_gbp(data, usd_gbp_rate):
    """Return the input data converted to GBP.
    """
    return (data * usd_gbp_rate)
This is better because we can now ask Python’s built-in help system to show us the documentation for the function:
OUTPUT
Help on function usd_to_gbp in module __main__:

usd_to_gbp(data, usd_gbp_rate)
    Return the input data converted to GBP.
A string like this is called a docstring. We don’t need to use triple quotes when we write one, but if we do, we can break the string across multiple lines:
PYTHON
def usd_to_gbp(data, usd_gbp_rate):
    """Return the input data converted to GBP.

    Examples
    --------
    >>> usd_to_gbp(10, 0.80)
    8.0
    """
    return (data * usd_gbp_rate)

help(usd_to_gbp)
OUTPUT
Help on function usd_to_gbp in module __main__:

usd_to_gbp(data, usd_gbp_rate)
    Return the input data converted to GBP.

    Examples
    --------
    >>> usd_to_gbp(10, 0.80)
    8.0
Defining Defaults
We have passed parameters to functions in two ways: directly, as in type(data), and by name, as in pd.read_csv(filename, index_col='country'). But we still need to say index_col=; if we leave it out, we get an error:
PYTHON
oceania = pd.read_csv('data/gapminder_gdp_oceania.csv', 'country')
ERROR
ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.
To understand what’s going on, and make our own functions easier to
use, let’s re-define our usd_to_gbp
function like this:
PYTHON
def usd_to_gbp(data, usd_gbp_rate=0.8):
    """Return the input data converted to GBP (USD-to-GBP rate 0.8 by default).

    Examples
    --------
    >>> usd_to_gbp(10)
    8.0
    """
    return (data * usd_gbp_rate)
The key change is that the second parameter is now written
usd_gbp_rate=0.8
instead of just usd_gbp_rate
.
If we call the function with two arguments, it works as it did
before:
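The call itself was stripped from this page. A two-argument call consistent with the output below would be usd_to_gbp(1, 0.3); the function is repeated here so the snippet runs on its own:

```python
# Definition repeated from above so this snippet is self-contained.
def usd_to_gbp(data, usd_gbp_rate=0.8):
    """Return the input data converted to GBP."""
    return (data * usd_gbp_rate)

# Two arguments: the default rate of 0.8 is overridden by 0.3.
print(usd_to_gbp(1, 0.3))
```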
OUTPUT
0.3
But we can also now call it with just one parameter, in which case
usd_gbp_rate
is automatically assigned the default value of 0.8.
This is handy: if we usually want a function to work one way, but occasionally need it to do something else, we can allow people to pass a parameter when they need to but provide a default to make the normal case easier. The example below shows how Python matches values to parameters:
PYTHON
def display(a=1, b=2, c=3):
    print('a:', a, 'b:', b, 'c:', c)

print('no parameters:')
display()
print('one parameter:')
display(55)
print('two parameters:')
display(55, 66)
OUTPUT
no parameters:
a: 1 b: 2 c: 3
one parameter:
a: 55 b: 2 c: 3
two parameters:
a: 55 b: 66 c: 3
As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in:
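The code that produced the output below was stripped from this page; reconstructed from that output, it was likely a call that names only c (the display function is repeated here so the snippet runs on its own):

```python
# Definition repeated from above so this snippet is self-contained.
def display(a=1, b=2, c=3):
    print('a:', a, 'b:', b, 'c:', c)

print('only setting the value of c')
display(c=77)  # a and b keep their defaults; only c is overridden
```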
OUTPUT
only setting the value of c
a: 1 b: 2 c: 77
With that in hand, let’s look at the help for
pd.read_csv
:
OUTPUT
Help on function read_csv in module pandas.io.parsers.readers:
read_csv(filepath_or_buffer: 'FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str]', *, sep: 'str | None | lib.NoDefault' = <no_default>, ...
Read a comma-separated values (csv) file into DataFrame.
Also supports optionally iterating or breaking of the file
into chunks.
Additional help can be found in the online docs for
`IO Tools <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html>`_.
Parameters
----------
...
There’s a lot of information here, but the most important part is the first couple of lines:
OUTPUT
read_csv(filepath_or_buffer: 'FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str]', *, sep: 'str | None | lib.NoDefault' = <no_default>, ...
This tells us that read_csv
has one parameter called
filepath_or_buffer
that doesn’t have a default value, and
many others that do. If we call the function as in our earlier example, pd.read_csv('data/gapminder_gdp_oceania.csv', 'country'), then the filename is assigned to filepath_or_buffer
(which is what we want), but the index column string
'country'
is assigned to sep
rather than
index_col
, because sep
is the second parameter
in the list. However 'country'
isn’t a known delimiter so
our code produced an error message when we tried to run it. When we call
pd.read_csv
we don’t have to provide
filepath_or_buffer=
for the filename because it’s the first
item in the list, but if we want 'country' to be assigned to the parameter index_col, we do have to write index_col= explicitly, since index_col is not the second parameter in the list.
Mixing Default and Non-Default Parameters
Given the following code:
PYTHON
def numbers(one, two=2, three, four=4):
    n = str(one) + str(two) + str(three) + str(four)
    return n

print(numbers(1, three=3))
what do you expect will be printed? What is actually printed? What rule do you think Python is following?
1234
one2three4
1239
SyntaxError
Given that, what does the following piece of code display when run?
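The code block for this question was stripped from the page. A hypothetical reconstruction consistent with the answer options and the solution given afterwards would be:

```python
# Hypothetical reconstruction of the stripped exercise code:
# a has no default; b and c have defaults 3 and 6.
def func(a, b=3, c=6):
    print('a:', a, 'b:', b, 'c:', c)

func(-1, 2)  # -1 matches a, 2 matches b, c keeps its default
```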
a: b: 3 c: 6
a: -1 b: 3 c: 6
a: -1 b: 2 c: 6
a: b: -1 c: 2
Attempting to define the numbers function results in option 4, a SyntaxError. The defined parameters two
and
four
are given default values. Because one
and
three
are not given default values, they are required to be
included as arguments when the function is called and must be placed
before any parameters that have default values in the function
definition.
The given call to func
displays
a: -1 b: 2 c: 6
. -1 is assigned to the first parameter
a
, 2 is assigned to the next parameter b
, and
c
is not passed a value, so it uses its default value
6.
Key Points
- Define a function using def function_name(parameter).
- The body of a function must be indented.
- Call a function using function_name(value).
- Variables defined within a function can only be seen and used within the body of the function.
- Variables created outside of any function are called global variables.
- Within a function, we can access global variables.
- If we want to do the same calculation on all entries in our columns, we can pass the dataframe columns as the inputs to a function.
- Use help(thing) to view help for something.
- Put docstrings in functions to provide help for that function.
- Specify default values for parameters when defining a function using name=value in the parameter list.
- Parameters can be passed by matching based on name, by position, or by omitting them (in which case the default value is used).
- Put code whose parameters change frequently in a function, then call it with different parameter values to customize its behavior.
Content from Errors and Exceptions
Last updated on 2024-02-23 | Edit this page
Estimated time: 30 minutes
Overview
Questions
- How does Python report errors?
- How can I handle errors in Python programs?
Objectives
- To be able to read a traceback, and determine where the error took place and what type it is.
- To be able to describe the types of situations in which syntax errors, indentation errors, name errors, index errors, and missing file errors occur.
Every programmer encounters errors, both those who are just beginning, and those who have been programming for years. Encountering errors and exceptions can be very frustrating at times, and can make coding feel like a hopeless endeavour. However, understanding what the different types of errors are and when you are likely to encounter them can help a lot. Once you know why you get certain types of errors, they become much easier to fix.
Errors in Python have a very specific form, called a traceback. Let’s examine one:
PYTHON
# This code has an intentional error. You can type it directly or
# use it for reference to understand the error message below.
def favorite_ice_cream():
    ice_creams = [
        'chocolate',
        'vanilla',
        'strawberry'
    ]
    print(ice_creams[3])

favorite_ice_cream()
ERROR
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-1-70bd89baa4df> in <module>()
9 print(ice_creams[3])
10
----> 11 favorite_ice_cream()
<ipython-input-1-70bd89baa4df> in favorite_ice_cream()
7 'strawberry'
8 ]
----> 9 print(ice_creams[3])
10
11 favorite_ice_cream()
IndexError: list index out of range
This particular traceback has two levels. You can determine the number of levels by looking for the number of arrows on the left hand side. In this case:
- The first shows code from the cell above, with an arrow pointing to Line 11 (which is favorite_ice_cream()).
- The second shows some code in the function favorite_ice_cream, with an arrow pointing to Line 9 (which is print(ice_creams[3])).
The last level is the actual place where the error occurred. The
other level(s) show what function the program executed to get to the
next level down. So, in this case, the program first performed a function call to the function
favorite_ice_cream
. Inside this function, the program
encountered an error on Line 9, when it tried to run the code
print(ice_creams[3])
.
Long Tracebacks
Sometimes, you might see a traceback that is very long -- sometimes they might even be 20 levels deep! This can make it seem like something horrible happened, but the length of the error message does not reflect severity, rather, it indicates that your program called many functions before it encountered the error. Most of the time, the actual place where the error occurred is at the bottom-most level, so you can skip down the traceback to the bottom.
So what error did the program actually encounter? In the last line of
the traceback, Python helpfully tells us the category or type of error
(in this case, it is an IndexError
) and a more detailed
error message (in this case, it says “list index out of range”).
If you encounter an error and don’t know what it means, it is still important to read the traceback closely. That way, if you fix the error, but encounter a new one, you can tell that the error changed. Additionally, sometimes knowing where the error occurred is enough to fix it, even if you don’t entirely understand the message.
If you do encounter an error you don’t recognize, try looking at the official documentation on errors. However, note that you may not always be able to find the error there, as it is possible to create custom errors. In that case, hopefully the custom error message is informative enough to help you figure out what went wrong.
Reading Error Messages
Read the Python code and the resulting traceback below, and answer the following questions:
- How many levels does the traceback have?
- What is the function name where the error occurred?
- On which line number in this function did the error occur?
- What is the type of error?
- What is the error message?
PYTHON
# This code has an intentional error. Do not type it directly;
# use it for reference to understand the error message below.
def print_message(day):
    messages = [
        'Hello, world!',
        'Today is Tuesday!',
        'It is the middle of the week.',
        'Today is Donnerstag in German!',
        'Last day of the week!',
        'Hooray for the weekend!',
        'Aw, the weekend is almost over.'
    ]
    print(messages[day])

def print_sunday_message():
    print_message(7)

print_sunday_message()
ERROR
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-7-3ad455d81842> in <module>
16 print_message(7)
17
---> 18 print_sunday_message()
19
<ipython-input-7-3ad455d81842> in print_sunday_message()
14
15 def print_sunday_message():
---> 16 print_message(7)
17
18 print_sunday_message()
<ipython-input-7-3ad455d81842> in print_message(day)
11 'Aw, the weekend is almost over.'
12 ]
---> 13 print(messages[day])
14
15 def print_sunday_message():
IndexError: list index out of range
- 3 levels
- print_message
- 13
- IndexError
- list index out of range

You can then infer that 7 is not the right index to use with messages.
Better errors on newer Pythons
Newer versions of Python have improved error printouts. If you are debugging errors, it is often helpful to use the latest Python version, even if you support older versions of Python.
Syntax Errors
When you forget a colon at the end of a line, accidentally add one
space too many when indenting under an if
statement, or
forget a parenthesis, you will encounter a syntax error. This means that
Python couldn’t figure out how to read your program. This is similar to
forgetting punctuation in English: for example, this text is difficult
to read there is no punctuation there is also no capitalization why is
this hard because you have to figure out where each sentence ends you
also have to figure out where each sentence begins to some extent it
might be ambiguous if there should be a sentence break or not
People can typically figure out what is meant by text with no punctuation, but people are much smarter than computers. If Python doesn’t know how to read the program, it will give up and inform you with an error. For example:
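The example snippet was stripped from this page. Reconstructed from the two error messages shown below, it was likely a function definition missing its colon (with an extra indent on the last line). It is compiled from a string here so that this snippet itself runs cleanly:

```python
# Hypothetical reconstruction of the stripped example, built as a string
# so we can trigger the same SyntaxError without breaking this snippet.
bad_source = (
    "def some_function()\n"
    "    msg = 'hello, world!'\n"
    "    print(msg)\n"
    "     return msg\n"
)
try:
    compile(bad_source, '<example>', 'exec')
except SyntaxError as err:
    print('SyntaxError on line', err.lineno)
```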
ERROR
File "<ipython-input-3-6bb841ea1423>", line 1
def some_function()
^
SyntaxError: invalid syntax
Here, Python tells us that there is a SyntaxError
on
line 1, and even puts a little arrow in the place where there is an
issue. In this case the problem is that the function definition is
missing a colon at the end.
Actually, the function above has two issues with syntax. If
we fix the problem with the colon, we see that there is also an
IndentationError
, which means that the lines in the
function definition do not all have the same indentation:
ERROR
File "<ipython-input-4-ae290e7659cb>", line 4
return msg
^
IndentationError: unexpected indent
Both SyntaxError
and IndentationError
indicate a problem with the syntax of your program, but an
IndentationError
is more specific: it always means
that there is a problem with how your code is indented.
Tabs and Spaces
Some indentation errors are harder to spot than others. In
particular, mixing spaces and tabs can be difficult to spot because they
are both whitespace. In the
example below, the first two lines in the body of the function
some_function
are indented with tabs, while the third line
— with spaces. If you’re working in a Jupyter notebook, be sure to copy
and paste this example rather than trying to type it in manually because
Jupyter automatically replaces tabs with spaces.
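The example itself was stripped from this page. Reconstructed from the description above (first two body lines tab-indented, third space-indented), and built from a string so the mixing is explicit and this snippet runs cleanly:

```python
# Hypothetical reconstruction: two tab-indented lines followed by a
# space-indented one, which makes Python raise a TabError on compile.
bad_source = (
    "def some_function():\n"
    "\tmsg = 'hello, world!'\n"
    "\tprint(msg)\n"
    "        return msg\n"
)
try:
    compile(bad_source, '<example>', 'exec')
except TabError as err:
    print('TabError:', err.msg)
```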
Visually it is impossible to spot the error. Fortunately, Python does not allow you to mix tabs and spaces.
ERROR
File "<ipython-input-5-653b36fbcd41>", line 4
return msg
^
TabError: inconsistent use of tabs and spaces in indentation
Variable Name Errors
Another very common type of error is called a NameError
,
and occurs when you try to use a variable that does not exist. For
example:
ERROR
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-7-9d7b17ad5387> in <module>()
----> 1 print(a)
NameError: name 'a' is not defined
Variable name errors come with some of the most informative error messages, which are usually of the form “name ‘the_variable_name’ is not defined”.
Why does this error message occur? That’s a harder question to answer, because it depends on what your code is supposed to do. However, there are a few very common reasons why you might have an undefined variable. The first is that you meant to use a string, but forgot to put quotes around it:
ERROR
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-8-9553ee03b645> in <module>()
----> 1 print(hello)
NameError: name 'hello' is not defined
The second reason is that you might be trying to use a variable that
does not yet exist. In the following example, count
should
have been defined (e.g., with count = 0
) before the for
loop:
ERROR
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-9-dd6a12d7ca5c> in <module>()
1 for number in range(10):
----> 2 count = count + number
3 print('The count is:', count)
NameError: name 'count' is not defined
Finally, the third possibility is that you made a typo when you were
writing your code. Let’s say we fixed the error above by adding the line
Count = 0
before the for loop. Frustratingly, this actually
does not fix the error. Remember that variables are case-sensitive, so the variable
count
is different from Count
. We still get
the same error, because we still have not defined
count
:
ERROR
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-10-d77d40059aea> in <module>()
1 Count = 0
2 for number in range(10):
----> 3 count = count + number
4 print('The count is:', count)
NameError: name 'count' is not defined
Index Errors
Next up are errors having to do with containers (like lists and strings) and the items within them. If you try to access an item in a list or a string that does not exist, then you will get an error. This makes sense: if you asked someone what day they would like to get coffee, and they answered “caturday”, you might be a bit annoyed. Python gets similarly annoyed if you try to ask it for an item that doesn’t exist:
PYTHON
letters = ['a', 'b', 'c']
print('Letter #1 is', letters[0])
print('Letter #2 is', letters[1])
print('Letter #3 is', letters[2])
print('Letter #4 is', letters[3])
OUTPUT
Letter #1 is a
Letter #2 is b
Letter #3 is c
ERROR
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-11-d817f55b7d6c> in <module>()
3 print('Letter #2 is', letters[1])
4 print('Letter #3 is', letters[2])
----> 5 print('Letter #4 is', letters[3])
IndexError: list index out of range
Here, Python is telling us that there is an IndexError
in our code, meaning we tried to access a list index that did not
exist.
File Errors
The last type of error we’ll cover today are those associated with
reading and writing files: FileNotFoundError
. If you try to
read a file that does not exist, you will receive a
FileNotFoundError
telling you so. If you attempt to write
to a file that was opened read-only, Python 3 returns an
UnsupportedOperation error
. More generally, problems with
input and output manifest as OSError
s, which may show up as
a more specific subclass; you can see the
list in the Python docs. They all have a unique UNIX
errno
, which you can see in the error message.
ERROR
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-14-f6e1ac4aee96> in <module>()
----> 1 file_handle = open('myfile.txt', 'r')
FileNotFoundError: [Errno 2] No such file or directory: 'myfile.txt'
One reason for receiving this error is that you specified an
incorrect path to the file. For example, if I am currently in a folder
called myproject
, and I have a file in
myproject/writing/myfile.txt
, but I try to open
myfile.txt
, this will fail. The correct path would be
writing/myfile.txt
. It is also possible that the file name
or its path contains a typo.
A related issue can occur if you use the “read” flag instead of the
“write” flag. Python will not give you an error if you try to open a
file for writing when the file does not exist. However, if you meant to
open a file for reading, but accidentally opened it for writing, and
then try to read from it, you will get an
UnsupportedOperation
error telling you that the file was
not opened for reading:
ERROR
---------------------------------------------------------------------------
UnsupportedOperation Traceback (most recent call last)
<ipython-input-15-b846479bc61f> in <module>()
1 file_handle = open('myfile.txt', 'w')
----> 2 file_handle.read()
UnsupportedOperation: not readable
These are the most common errors with files, though many others exist. If you get an error that you’ve never seen before, searching the Internet for that error type often reveals common reasons why you might get that error.
Identifying Syntax Errors
- Read the code below, and (without running it) try to identify what the errors are.
- Run the code, and read the error message. Is it a SyntaxError or an IndentationError?
- Fix the error.
- Repeat steps 2 and 3, until you have fixed all the errors.
Identifying Variable Name Errors
- Read the code below, and (without running it) try to identify what the errors are.
- Run the code, and read the error message. What type of NameError do you think this is? In other words, is it a string with no quotes, a misspelled variable, or a variable that should have been defined but was not?
- Fix the error.
- Repeat steps 2 and 3, until you have fixed all the errors.
3 NameErrors: for number being misspelled, for message not being defined, and for a not being in quotes.
Fixed version:
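The code blocks for this exercise were stripped from the page. A fixed version consistent with the three errors described in the solution (misspelled number, undefined message, unquoted a) might look like:

```python
message = ''  # define message before the loop so it exists
for number in range(10):
    # use a if the number is a multiple of 3, otherwise use b
    if (number % 3) == 0:  # 'number' spelled consistently
        message = message + 'a'  # 'a' quoted so it is a string
    else:
        message = message + 'b'
print(message)
```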
Key Points
- Tracebacks can look intimidating, but they give us a lot of useful information about what went wrong in our program, including where the error occurred and what type of error it was.
- An error having to do with the ‘grammar’ or syntax of the program is called a SyntaxError. If the issue has to do with how the code is indented, then it will be called an IndentationError.
- A NameError will occur when trying to use a variable that does not exist. Possible causes are that a variable definition is missing, a variable reference differs from its definition in spelling or capitalization, or the code contains a string that is missing quotes around it.
- Containers like lists and strings will generate errors if you try to access items in them that do not exist. This type of error is called an IndexError.
- Trying to read a file that does not exist will give you a FileNotFoundError. Trying to read a file that is open for writing, or writing to a file that is open for reading, will give you an IOError.
Content from Defensive Programming
Last updated on 2024-02-23 | Edit this page
Estimated time: 40 minutes
Overview
Questions
- How can I make my programs more reliable?
Objectives
- Explain what an assertion is.
- Add assertions that check the program’s state is correct.
- Correctly add precondition and postcondition assertions to functions.
- Explain what test-driven development is, and use it when creating new functions.
- Explain why variables should be initialized using actual data values rather than arbitrary constants.
Our previous lessons have introduced the basic tools of programming: variables and lists, file I/O, loops, conditionals, and functions. What they haven’t done is show us how to tell whether a program is getting the right answer, and how to tell if it’s still getting the right answer as we make changes to it.
To achieve that, we need to:
- Write programs that check their own operation.
- Write and run tests for widely-used functions.
- Make sure we know what “correct” actually means.
The good news is, doing these things will speed up our programming, not slow it down. As in real carpentry — the kind done with lumber — the time saved by measuring carefully before cutting a piece of wood is much greater than the time that measuring takes.
Assertions
The first step toward getting the right answers from our programs is to assume that mistakes will happen and to guard against them. This is called defensive programming, and the most common way to do it is to add assertions to our code so that it checks itself as it runs. An assertion is simply a statement that something must be true at a certain point in a program. When Python sees one, it evaluates the assertion’s condition. If it’s true, Python does nothing, but if it’s false, Python halts the program immediately and prints the error message if one is provided. For example, this piece of code halts as soon as the loop encounters a value that isn’t positive:
PYTHON
numbers = [1.5, 2.3, 0.7, -0.001, 4.4]
total = 0.0
for num in numbers:
    assert num > 0.0, 'Data should only contain positive values'
    total += num
print('total is:', total)
ERROR
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-19-33d87ea29ae4> in <module>()
2 total = 0.0
3 for num in numbers:
----> 4 assert num > 0.0, 'Data should only contain positive values'
5 total += num
6 print('total is:', total)
AssertionError: Data should only contain positive values
Programs like the Firefox browser are full of assertions: 10-20% of the code they contain is there to check that the other 80-90% is working correctly. Broadly speaking, assertions fall into three categories:
- A precondition is something that must be true at the start of a function in order for it to work correctly.
- A postcondition is something that the function guarantees is true when it finishes.
- An invariant is something that is always true at a particular point inside a piece of code.
For example, suppose we are representing rectangles using a tuple of four coordinates
(x0, y0, x1, y1)
, representing the lower left and upper
right corners of the rectangle. In order to do some calculations, we
need to normalize the rectangle so that the lower left corner is at the
origin and the longest side is 1.0 units long. This function does that,
but checks that its input is correctly formatted and that its result
makes sense:
PYTHON
def normalize_rectangle(rect):
    """Normalizes a rectangle so that it is at the origin and 1.0 units long on its longest axis.
    Input should be of the format (x0, y0, x1, y1).
    (x0, y0) and (x1, y1) define the lower left and upper right corners
    of the rectangle, respectively."""
    assert len(rect) == 4, 'Rectangles must contain 4 coordinates'
    x0, y0, x1, y1 = rect
    assert x0 < x1, 'Invalid X coordinates'
    assert y0 < y1, 'Invalid Y coordinates'

    dx = x1 - x0
    dy = y1 - y0
    if dx > dy:
        scaled = dx / dy
        upper_x, upper_y = 1.0, scaled
    else:
        scaled = dx / dy
        upper_x, upper_y = scaled, 1.0

    assert 0 < upper_x <= 1.0, 'Calculated upper X coordinate invalid'
    assert 0 < upper_y <= 1.0, 'Calculated upper Y coordinate invalid'

    return (0, 0, upper_x, upper_y)
The preconditions on lines 6, 8, and 9 catch invalid inputs:
ERROR
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-2-1b9cd8e18a1f> in <module>
----> 1 print(normalize_rectangle( (0.0, 1.0, 2.0) )) # missing the fourth coordinate
<ipython-input-1-c94cf5b065b9> in normalize_rectangle(rect)
4 (x0, y0) and (x1, y1) define the lower left and upper right corners
5 of the rectangle, respectively."""
----> 6 assert len(rect) == 4, 'Rectangles must contain 4 coordinates'
7 x0, y0, x1, y1 = rect
8 assert x0 < x1, 'Invalid X coordinates'
AssertionError: Rectangles must contain 4 coordinates
ERROR
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-3-325036405532> in <module>
----> 1 print(normalize_rectangle( (4.0, 2.0, 1.0, 5.0) )) # X axis inverted
<ipython-input-1-c94cf5b065b9> in normalize_rectangle(rect)
6 assert len(rect) == 4, 'Rectangles must contain 4 coordinates'
7 x0, y0, x1, y1 = rect
----> 8 assert x0 < x1, 'Invalid X coordinates'
9 assert y0 < y1, 'Invalid Y coordinates'
10
AssertionError: Invalid X coordinates
The post-conditions on lines 20 and 21 help us catch bugs by telling us when our calculations might have been incorrect. For example, if we normalize a rectangle that is taller than it is wide everything seems OK:
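The call itself was stripped from this page; reconstructed from the output shown below, it normalizes a rectangle that is taller than it is wide. The function is repeated here, unchanged (deliberate bug and all), so the snippet runs on its own:

```python
# normalize_rectangle repeated from above so this snippet is
# self-contained; the intentional bug in the dx > dy branch is kept.
def normalize_rectangle(rect):
    """Normalizes a rectangle: origin at (0, 0), longest axis 1.0 units."""
    assert len(rect) == 4, 'Rectangles must contain 4 coordinates'
    x0, y0, x1, y1 = rect
    assert x0 < x1, 'Invalid X coordinates'
    assert y0 < y1, 'Invalid Y coordinates'

    dx = x1 - x0
    dy = y1 - y0
    if dx > dy:
        scaled = dx / dy
        upper_x, upper_y = 1.0, scaled
    else:
        scaled = dx / dy
        upper_x, upper_y = scaled, 1.0

    assert 0 < upper_x <= 1.0, 'Calculated upper X coordinate invalid'
    assert 0 < upper_y <= 1.0, 'Calculated upper Y coordinate invalid'

    return (0, 0, upper_x, upper_y)

# A rectangle that is taller than it is wide:
print(normalize_rectangle((0.0, 0.0, 0.2, 1.0)))
```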
OUTPUT
(0, 0, 0.2, 1.0)
but if we normalize one that’s wider than it is tall, the assertion is triggered:
ERROR
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-5-8d4a48f1d068> in <module>
----> 1 print(normalize_rectangle( (0.0, 0.0, 5.0, 1.0) ))
<ipython-input-1-c94cf5b065b9> in normalize_rectangle(rect)
19
20 assert 0 < upper_x <= 1.0, 'Calculated upper X coordinate invalid'
---> 21 assert 0 < upper_y <= 1.0, 'Calculated upper Y coordinate invalid'
22
23 return (0, 0, upper_x, upper_y)
AssertionError: Calculated upper Y coordinate invalid
Re-reading our function, we realize that line 14 should divide
dy
by dx
rather than dx
by
dy
. In a Jupyter notebook, you can display line numbers by
typing Ctrl+M followed by L. If we had
left out the assertion at the end of the function, we would have created
and returned something that had the right shape as a valid answer, but
wasn’t. Detecting and debugging that would almost certainly have taken
more time in the long run than writing the assertion.
But assertions aren’t just about catching errors: they also help people understand programs. Each assertion gives the person reading the program a chance to check (consciously or otherwise) that their understanding matches what the code is doing.
Most good programmers follow two rules when adding assertions to their code. The first is, fail early, fail often. The greater the distance between when and where an error occurs and when it’s noticed, the harder the error will be to debug, so good code catches mistakes as early as possible.
The second rule is, turn bugs into assertions or tests. Whenever you fix a bug, write an assertion that catches the mistake should you make it again. If you made a mistake in a piece of code, the odds are good that you have made other mistakes nearby, or will make the same mistake (or a related one) the next time you change it. Writing assertions to check that you haven’t regressed (i.e., haven’t re-introduced an old problem) can save a lot of time in the long run, and helps to warn people who are reading the code (including your future self) that this bit is tricky.
Test-Driven Development
An assertion checks that something is true at a particular point in the program. The next step is to check the overall behavior of a piece of code, i.e., to make sure that it produces the right output when it’s given a particular input. For example, suppose we need to find where two or more time series overlap. The range of each time series is represented as a pair of numbers, which are the time the interval started and ended. The output is the largest range that they all include:
Most novice programmers would solve this problem like this:
- Write a function range_overlap.
- Call it interactively on two or three different inputs.
- If it produces the wrong answer, fix the function and re-run that test.
This clearly works — after all, thousands of scientists are doing it right now — but there’s a better way:
- Write a short function for each test.
- Write a range_overlap function that should pass those tests.
- If range_overlap produces any wrong answers, fix it and re-run the test functions.
Writing the tests before writing the function they exercise is called test-driven development (TDD). Its advocates believe it produces better code faster because:
- If people write tests after writing the thing to be tested, they are subject to confirmation bias, i.e., they subconsciously write tests to show that their code is correct, rather than to find errors.
- Writing tests helps programmers figure out what the function is actually supposed to do.
We start by defining an empty function
range_overlap
:
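The stub itself was stripped from this page; a minimal empty version consistent with the tests that follow might be:

```python
def range_overlap(ranges):
    """Return common overlap among a set of [left, right] ranges."""
    pass  # no logic yet, so every call returns None
```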
Here are three test statements for range_overlap
:
PYTHON
assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)
ERROR
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-25-d8be150fbef6> in <module>()
----> 1 assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
2 assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
3 assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)
AssertionError:
The error is actually reassuring: we haven’t implemented any logic
into range_overlap
yet, so if the tests passed, it would
indicate that we’ve written an entirely ineffective test.
And as a bonus of writing these tests, we’ve implicitly defined what our input and output look like: we expect a list of pairs as input, and produce a single pair as output.
Something important is missing, though. We don't have any tests for the case where the ranges don't overlap at all.
What should range_overlap do in this case: fail with an error message, produce a special value like (0.0, 0.0) to signal that there's no overlap, or something else? Any actual implementation of the function will do one of these things; writing the tests first helps us figure out which is best before we're emotionally invested in whatever we happened to write before we realized there was an issue.
And what about this case?
Do two segments that touch at their endpoints overlap or not?
Mathematicians usually say “yes”, but engineers usually say “no”. The
best answer is “whatever is most useful in the rest of our program”, but
again, any actual implementation of range_overlap
is going
to do something, and whatever it is ought to be consistent with
what it does when there’s no overlap at all.
Since we’re planning to use the range this function returns as the X axis in a time series chart, we decide that:
- every overlap has to have non-zero width, and
- we will return the special value None when there's no overlap.

None is built into Python, and means "nothing here". (Other languages often call the equivalent value null or nil.) With that decision made, we can finish writing our last two tests:
PYTHON
assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None
assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None
ERROR
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-26-d877ef460ba2> in <module>()
----> 1 assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None
2 assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None
AssertionError:
Again, we get an error because we haven’t written our function, but we’re now ready to do so:
PYTHON
def range_overlap(ranges):
"""Return common overlap among a set of [left, right] ranges."""
max_left = 0.0
min_right = 1.0
for (left, right) in ranges:
max_left = max(max_left, left)
min_right = min(min_right, right)
return (max_left, min_right)
Take a moment to think about why we calculate the left endpoint of the overlap as the maximum of the input left endpoints, and the overlap right endpoint as the minimum of the input right endpoints. We’d now like to re-run our tests, but they’re scattered across three different cells. To make running them easier, let’s put them all in a function:
PYTHON
def test_range_overlap():
assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None
assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None
assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)
assert range_overlap([]) == None
We can now test range_overlap with a single function call:
ERROR
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-29-cf9215c96457> in <module>()
----> 1 test_range_overlap()
<ipython-input-28-5d4cd6fd41d9> in test_range_overlap()
1 def test_range_overlap():
----> 2 assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None
3 assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None
4 assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
5 assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
AssertionError:
The first test that was supposed to produce None fails, so we know something is wrong with our function. We don't know whether the other tests passed or failed because Python halted the program as soon as it spotted the first error. Still, some information is better than none, and if we trace the behavior of the function with that input, we realize that we're initializing max_left and min_right to 0.0 and 1.0 respectively, regardless of the input values. This violates another important rule of programming: always initialize from data.
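Following that rule, one possible corrected implementation (a sketch, not necessarily the lesson's official solution) initializes the endpoints from the first range and returns None for empty input and for overlaps of zero width:

```python
def range_overlap(ranges):
    """Return common overlap among a set of [left, right] ranges,
    or None if there is no overlap of non-zero width."""
    if not ranges:
        return None
    max_left, min_right = ranges[0]       # initialize from data, not constants
    for (left, right) in ranges[1:]:
        max_left = max(max_left, left)
        min_right = min(min_right, right)
    if max_left >= min_right:             # ranges touch, or don't overlap at all
        return None
    return (max_left, min_right)
```

With this version, all six assertions in test_range_overlap pass.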
Pre- and Post-Conditions
Suppose you are writing a function called average that calculates the average of the numbers in a list. What pre-conditions and post-conditions would you write for it? Compare your answer to your neighbor's: can you think of a function that will pass your tests but not theirs, or vice versa?
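As one possible answer (the assertion messages and the exact conditions are illustrative, not the lesson's official solution), a pre-condition might check that the input is non-empty, and a post-condition might check that the result lies between the minimum and maximum input:

```python
def average(values):
    # pre-condition: an empty list has no average
    assert len(values) > 0, 'Cannot average an empty list'
    result = sum(values) / len(values)
    # post-condition: the mean must lie between the extremes of the input
    assert min(values) <= result <= max(values), 'Average outside input range'
    return result

print(average([1, 2, 3, 4]))  # prints 2.5
```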
Testing Assertions
Given a sequence of a number of cars, the function get_total_cars returns the total number of cars.
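The function itself is not shown in this copy of the lesson; a reconstruction consistent with the three assertions described in the solution below, and with the outputs shown, would be:

```python
def get_total_cars(values):
    assert len(values) > 0                           # input must not be empty
    for element in values:
        assert int(element)                          # int('a') raises the ValueError shown below
    values = [int(element) for element in values]
    total = sum(values)
    assert total > 0                                 # total must be positive
    return total

print(get_total_cars([1, 2, 3, 4]))   # prints 10
```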
OUTPUT
10
OUTPUT
ValueError: invalid literal for int() with base 10: 'a'
Explain in words what the assertions in this function check, and for each one, give an example of input that will make that assertion fail.
- The first assertion checks that the input sequence values is not empty. An empty sequence such as [] will make it fail.
- The second assertion checks that each value in the list can be turned into an integer. Input such as [1, 2, 'c', 3] will make it fail.
- The third assertion checks that the total of the list is greater than 0. Input such as [-10, 2, 3] will make it fail.
Key Points
- Program defensively, i.e., assume that errors are going to arise, and write code to detect them when they do.
- Put assertions in programs to check their state as they run, and to help readers understand how those programs are supposed to work.
- Use preconditions to check that the inputs to a function are safe to use.
- Use postconditions to check that the output from a function is safe to use.
- Write tests before writing code in order to help determine exactly what that code is supposed to do.
Content from Debugging
Last updated on 2024-02-23 | Edit this page
Estimated time: 50 minutes
Overview
Questions
- How can I debug my program?
Objectives
- Debug code containing an error systematically.
- Identify ways of making code less error-prone and more easily tested.
Once testing has uncovered problems, the next step is to fix them. Many novices do this by making more-or-less random changes to their code until it seems to produce the right answer, but that’s very inefficient (and the result is usually only correct for the one case they’re testing). The more experienced a programmer is, the more systematically they debug, and most follow some variation on the rules explained below.
Know What It’s Supposed to Do
The first step in debugging something is to know what it’s supposed to do. “My program doesn’t work” isn’t good enough: in order to diagnose and fix problems, we need to be able to tell correct output from incorrect. If we can write a test case for the failing case — i.e., if we can assert that with these inputs, the function should produce that result — then we’re ready to start debugging. If we can’t, then we need to figure out how we’re going to know when we’ve fixed things.
But writing test cases for scientific software is frequently harder than writing test cases for commercial applications, because if we knew what the output of the scientific code was supposed to be, we wouldn’t be running the software: we’d be writing up our results and moving on to the next program. In practice, scientists tend to do the following:
Test with simplified data. Before doing statistics on a real data set, we should try calculating statistics for a single record, for two identical records, for two records whose values are one step apart, or for some other case where we can calculate the right answer by hand.
Test a simplified case. If our program is supposed to simulate magnetic eddies in rapidly-rotating blobs of supercooled helium, our first test should be a blob of helium that isn’t rotating, and isn’t being subjected to any external electromagnetic fields. Similarly, if we’re looking at the effects of climate change on speciation, our first test should hold temperature, precipitation, and other factors constant.
Compare to an oracle. A test oracle is something whose results are trusted, such as experimental data, an older program, or a human expert. We use test oracles to determine if our new program produces the correct results. If we have a test oracle, we should store its output for particular cases so that we can compare it with our new results as often as we like without re-running that program.
Check conservation laws. Mass, energy, and other quantities are conserved in physical systems, so they should be in programs as well. Similarly, if we are analyzing patient data, the number of records should either stay the same or decrease as we move from one analysis to the next (since we might throw away outliers or records with missing values). If “new” patients start appearing out of nowhere as we move through our pipeline, it’s probably a sign that something is wrong.
Visualize. Data analysts frequently use simple visualizations to check both the science they’re doing and the correctness of their code (just as we did in the opening lesson of this tutorial). This should only be used for debugging as a last resort, though, since it’s very hard to compare two visualizations automatically.
Make It Fail Every Time
We can only debug something when it fails, so the second step is always to find a test case that makes it fail every time. The “every time” part is important because few things are more frustrating than debugging an intermittent problem: if we have to call a function a dozen times to get a single failure, the odds are good that we’ll scroll past the failure when it actually occurs.
As part of this, it's always important to check that our code is "plugged in", i.e., that we're actually exercising the problem that we think we are. Every programmer has spent hours chasing a bug, only to realize that they were actually calling their code on the wrong data set, using the wrong configuration parameters, or using the wrong version of the software entirely. Mistakes like these are particularly likely to happen when we're tired, frustrated, and up against a deadline, which is one of the reasons late-night (or overnight) coding sessions are almost never worthwhile.
Make It Fail Fast
If it takes 20 minutes for the bug to surface, we can only do three experiments an hour. This means that we’ll get less data in more time and that we’re more likely to be distracted by other things as we wait for our program to fail, which means the time we are spending on the problem is less focused. It’s therefore critical to make it fail fast.
As well as making the program fail fast in time, we want to make it fail fast in space, i.e., we want to localize the failure to the smallest possible region of code:
The smaller the gap between cause and effect, the easier the connection is to find. Many programmers therefore use a divide and conquer strategy to find bugs, i.e., if the output of a function is wrong, they check whether things are OK in the middle, then concentrate on either the first or second half, and so on.
N things can interact in N! different ways, so every line of code that isn’t run as part of a test means more than one thing we don’t need to worry about.
Change One Thing at a Time, For a Reason
Replacing random chunks of code is unlikely to do much good. (After all, if you got it wrong the first time, you’ll probably get it wrong the second and third as well.) Good programmers therefore change one thing at a time, for a reason. They are either trying to gather more information (“is the bug still there if we change the order of the loops?”) or test a fix (“can we make the bug go away by sorting our data before processing it?”).
Every time we make a change, however small, we should re-run our tests immediately, because the more things we change at once, the harder it is to know what’s responsible for what (those N! interactions again). And we should re-run all of our tests: more than half of fixes made to code introduce (or re-introduce) bugs, so re-running all of our tests tells us whether we have regressed.
Keep Track of What You’ve Done
Good scientists keep track of what they’ve done so that they can reproduce their work, and so that they don’t waste time repeating the same experiments or running ones whose results won’t be interesting. Similarly, debugging works best when we keep track of what we’ve done and how well it worked. If we find ourselves asking, “Did left followed by right with an odd number of lines cause the crash? Or was it right followed by left? Or was I using an even number of lines?” then it’s time to step away from the computer, take a deep breath, and start working more systematically.
Records are particularly useful when the time comes to ask for help. People are more likely to listen to us when we can explain clearly what we did, and we’re better able to give them the information they need to be useful.
Version Control Revisited
Version control is often used to reset software to a known state during debugging, and to explore recent changes to code that might be responsible for bugs. In particular, most version control systems (e.g. git, Mercurial) have:
- a blame command that shows who last changed each line of a file;
- a bisect command that helps with finding the commit that introduced an issue.
Be Humble
And speaking of help: if we can’t find a bug in 10 minutes, we should be humble and ask for help. Explaining the problem to someone else is often useful, since hearing what we’re thinking helps us spot inconsistencies and hidden assumptions. If you don’t have someone nearby to share your problem description with, get a rubber duck!
Asking for help also helps alleviate confirmation bias. If we have just spent an hour writing a complicated program, we want it to work, so we’re likely to keep telling ourselves why it should, rather than searching for the reason it doesn’t. People who aren’t emotionally invested in the code can be more objective, which is why they’re often able to spot the simple mistakes we have overlooked.
Part of being humble is learning from our mistakes. Programmers tend to get the same things wrong over and over: either they don’t understand the language and libraries they’re working with, or their model of how things work is wrong. In either case, taking note of why the error occurred and checking for it next time quickly turns into not making the mistake at all.
And that is what makes us most productive in the long run. As the saying goes, A week of hard work can sometimes save you an hour of thought. If we train ourselves to avoid making some kinds of mistakes, to break our code into modular, testable chunks, and to turn every assumption (or mistake) into an assertion, it will actually take us less time to produce working programs, not more.
Debug With a Neighbor
Take a function that you have written today, and introduce a tricky bug. Your function should still run, but will give the wrong output. Switch seats with your neighbor and attempt to debug the bug that they introduced into their function. Which of the principles discussed above did you find helpful?
Not Supposed to be the Same
You are assisting a researcher with Python code that computes the Body Mass Index (BMI) of patients. The researcher is concerned because all patients seemingly have unusual and identical BMIs, despite having different physiques. BMI is calculated as weight in kilograms divided by the square of height in metres.
Use the debugging principles in this exercise and locate problems with the code. What suggestions would you give the researcher for ensuring any later changes they make work correctly? What bugs do you spot?
PYTHON
patients = [[70, 1.8], [80, 1.9], [150, 1.7]]
def calculate_bmi(weight, height):
return weight / (height ** 2)
for patient in patients:
weight, height = patients[0]
bmi = calculate_bmi(height, weight)
print("Patient's BMI is:", bmi)
OUTPUT
Patient's BMI is: 0.000367
Patient's BMI is: 0.000367
Patient's BMI is: 0.000367
Suggestions for debugging
- Add a print statement in the calculate_bmi function, like print('weight:', weight, 'height:', height), to make clear what the BMI is based on.
- Change print("Patient's BMI is:", bmi) to print("Patient's BMI (weight: %f, height: %f) is: %f" % (weight, height, bmi)), in order to be able to distinguish bugs in the function from bugs in the loop.
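For reference, the loop contains two bugs: it reads patients[0] instead of patient, so every iteration uses the first patient's data, and the arguments to calculate_bmi are swapped. A corrected version:

```python
patients = [[70, 1.8], [80, 1.9], [150, 1.7]]

def calculate_bmi(weight, height):
    return weight / (height ** 2)

for patient in patients:
    weight, height = patient                # was patients[0]: always the first patient
    bmi = calculate_bmi(weight, height)     # was calculate_bmi(height, weight): swapped
    print("Patient's BMI is: %f" % bmi)
```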
Key Points
- Know what code is supposed to do before trying to debug it.
- Make it fail every time.
- Make it fail fast.
- Change one thing at a time, and for a reason.
- Keep track of what you’ve done.
- Be humble.
Content from Command-Line Programs
Last updated on 2024-02-23 | Edit this page
Estimated time: 30 minutes
Overview
Questions
- How can I write Python programs that will work like Unix command-line tools?
Objectives
- Use the values of command-line arguments in a program.
- Handle flags and files separately in a command-line program.
- Read data from standard input in a program so that it can be used in a pipeline.
The Jupyter Notebook and other interactive tools are great for prototyping code and exploring data, but sooner or later we will want to use our program in a pipeline or run it in a shell script to process thousands of data files. In order to do that in an efficient way, we need to make our programs work like other Unix command-line tools. For example, we may want a program that reads a dataset and prints the average GDP per country.
Switching to Shell Commands
In this lesson we are switching from typing commands in a Python interpreter to typing commands in a shell terminal window (such as bash). When you see a $ in front of a command, that tells you to run that command in the shell rather than the Python interpreter.
This program does exactly what we want: it prints the average GDP per country for a given file.
OUTPUT
5937.029526
36126.4927
33692.60508
...
37506.41907
8458.276384
33203.26128
We might also want to look at the minimum of the first four lines, or the maximum GDP in several files one after another.
Our scripts should do the following:
- If no filename is given on the command line, read data from standard input.
- If one or more filenames are given, read data from them and report statistics for each file separately.
- Use the --min, --mean, or --max flag to determine what statistic to print.
To make this work, we need to know how to handle command-line arguments in a program, and understand how to handle standard input. We’ll tackle these questions in turn below.
Command-Line Arguments
We are going to create a file with our Python code in it, then use the bash shell to run the code. Using the text editor of your choice, save the following in a text file called sys_version.py:
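The file's contents are missing from this copy of the lesson; judging from the output below, they would be something like:

```python
import sys

# sys.version is a string describing the running Python interpreter
print('version is', sys.version)
```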
The first line imports a library called sys, which is short for "system". It defines values such as sys.version, which describes which version of Python we are running. Running this script from the command line produces:
OUTPUT
version is 3.11.3 (main, Apr 5 2023, 15:52:25) [GCC 12.2.1 20230201]
Create another file called argv_list.py and save the following text to it.
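The file's contents are also missing here; judging from the output shown below, they would be:

```python
import sys

# sys.argv holds the script name followed by any command-line arguments
print('sys.argv is', sys.argv)
```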
The strange name argv stands for "argument values". Whenever Python runs a program, it takes all of the values given on the command line and puts them in the list sys.argv so that the program can determine what they were. If we run this program with no arguments:
OUTPUT
sys.argv is ['argv_list.py']
the only thing in the list is the full path to our script, which is always sys.argv[0]. If we run it with a few arguments, however:
OUTPUT
sys.argv is ['argv_list.py', 'first', 'second', 'third']
then Python adds each of those arguments to that magic list.
With this in hand, let's build a version of readings.py that always prints the per-country mean of a single data file. The first step is to write a function that outlines our implementation, and a placeholder for the function that does the actual work. By convention this function is usually called main, though we can call it whatever we want:
PYTHON
import sys
import pandas as pd
def main():
script = sys.argv[0]
filename = sys.argv[1]
data = pd.read_csv(filename, index_col='country')
for row_mean in data.mean(axis='columns'):
print(row_mean)
This function gets the name of the script from sys.argv[0], because that's where it's always put, and the name of the file to process from sys.argv[1]. As a simple test, we can run the script on one of our data files.
There is no output because we have defined a function, but haven't actually called it. Let's add a call to main:
PYTHON
import sys
import pandas as pd
def main():
script = sys.argv[0]
filename = sys.argv[1]
data = pd.read_csv(filename, index_col='country')
for row_mean in data.mean(axis='columns'):
print(row_mean)
if __name__ == '__main__':
main()
and run that:
OUTPUT
9980.595634166664
17262.6228125
Running Versus Importing
Running a Python script in bash is very similar to importing that file in Python. The biggest difference is that we don’t expect anything to happen when we import a file, whereas when running a script, we expect to see some output printed to the console.
In order for a Python script to work as expected when imported or when run as a script, we typically put the part of the script that produces output in the following if statement:
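That if statement does not appear in this copy of the lesson; it is the same pattern used in the script above, sketched here with a placeholder main:

```python
def main():
    print('running as a script')

# main() is called only when the file is executed directly,
# not when it is imported by another module
if __name__ == '__main__':
    main()
```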
When you import a Python file, __name__ is the name of that file (e.g., when importing readings.py, __name__ is 'readings'). However, when running a script in bash, __name__ is always set to '__main__' in that script so that you can determine if the file is being imported or run as a script.
The Right Way to Do It
If our programs can take complex parameters or multiple filenames, we shouldn't handle sys.argv directly. Instead, we should use Python's argparse library, which handles common cases in a systematic way, and also makes it easy for us to provide sensible error messages for our users. We will not cover this module in this lesson, but you can go to Tshepang Lekhonkhobe's Argparse tutorial that is part of Python's Official Documentation.
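As a taste of what that looks like, here is a minimal sketch (the flag names match this lesson, but the filename and the overall design are illustrative, not the lesson's solution):

```python
import argparse

parser = argparse.ArgumentParser(description='Print per-country statistics.')
# the three flags are mutually exclusive: at most one may be given
group = parser.add_mutually_exclusive_group()
group.add_argument('--min', action='store_true', help='print the minimum')
group.add_argument('--mean', action='store_true', help='print the mean')
group.add_argument('--max', action='store_true', help='print the maximum')
parser.add_argument('filenames', nargs='*', help='CSV files to process')

# parse_args() with no argument would read sys.argv[1:];
# here we pass a list explicitly for demonstration
args = parser.parse_args(['--mean', 'gdp_europe.csv'])
print(args.mean, args.filenames)   # prints: True ['gdp_europe.csv']
```

Unrecognized flags produce a usage message automatically, which is exactly the kind of sensible error handling we would otherwise have to write by hand.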
Handling Multiple Files
The next step is to teach our program how to handle multiple files. Since 60 lines of output per file is a lot to page through, we’ll start by using three smaller files:
OUTPUT
small_gdp_discworld.csv small_gdp_middle-earth.csv
OUTPUT
country,800,1000,1200,1400,1600,1800
Rivendell, 100, 100, 200, 200, 300, 300
Mordor, 20, 40, 60, 80, 100, 300
Hobbiton,10, 10, 10, 10, 10, 10
Moria, 150, 250, 100, 50, 50, 0
OUTPUT
200.0
100.0
10.0
100.0
Using small data files as input also allows us to check our results more easily: here, for example, we can see that our program is calculating the mean correctly for each line, whereas we were really taking it on faith before. This is yet another rule of programming: test the simple things first.
We want our program to process each file separately, so we need a loop that executes once for each filename. If we specify the files on the command line, the filenames will be in sys.argv, but we need to be careful: sys.argv[0] will always be the name of our script, rather than the name of a file. We also need to handle an unknown number of filenames, since our program could be run for any number of files.
The solution to both problems is to loop over the contents of sys.argv[1:]. The '1' tells Python to start the slice at location 1, so the program's name isn't included; since we've left off the upper bound, the slice runs to the end of the list and includes all the filenames. Here's our changed program readings_03.py:
PYTHON
import sys
import pandas as pd
def main():
script = sys.argv[0]
for filename in sys.argv[1:]:
data = pd.read_csv(filename, index_col='country')
for row_mean in data.mean(axis='columns'):
print(row_mean)
if __name__ == '__main__':
main()
and here it is in action:
OUTPUT
0.0
35.0
15.0
200.0
100.0
10.0
100.0
The Right Way to Do It
At this point, we have created three versions of our script called readings_01.py, readings_02.py, and readings_03.py. We wouldn't do this in real life: instead, we would have one file called readings.py that we committed to version control every time we got an enhancement working. For teaching, though, we need all the successive versions side by side.
Handling Command-Line Flags
The next step is to teach our program to pay attention to the --min, --mean, and --max flags. These always appear before the names of the files, so we could do this:
PYTHON
import sys
import pandas as pd
def main():
script = sys.argv[0]
action = sys.argv[1]
filenames = sys.argv[2:]
for filename in filenames:
data = pd.read_csv(filename, index_col='country')
if action == '--min':
values = data.min(axis='columns')
elif action == '--mean':
values = data.mean(axis='columns')
elif action == '--max':
values = data.max(axis='columns')
for val in values:
print(val)
if __name__ == '__main__':
main()
This works:
OUTPUT
0
60
20
but there are several things wrong with it:
- main is too large to read comfortably.
- If we specify only one additional argument on the command line instead of at least two (one for the flag and one for the filename), the program will not throw an exception but will still run. It assumes that the file list is empty, as sys.argv[1] will be considered the action, even if it is a filename. Silent failures like this are always hard to debug.
- The program should check that the submitted action is one of the three recognized flags.
This version pulls the processing of each file out of the loop into a function of its own. It also checks that action is one of the allowed flags before doing any processing, so that the program fails fast:
PYTHON
import sys
import pandas as pd
def main():
script = sys.argv[0]
action = sys.argv[1]
filenames = sys.argv[2:]
assert action in ['--min', '--mean', '--max'], \
'Action is not one of --min, --mean, or --max: ' + action
for filename in filenames:
process(filename, action)
def process(filename, action):
data = pd.read_csv(filename, index_col='country')
if action == '--min':
values = data.min(axis='columns')
elif action == '--mean':
values = data.mean(axis='columns')
elif action == '--max':
values = data.max(axis='columns')
for val in values:
print(val)
if __name__ == '__main__':
main()
This is four lines longer than its predecessor, but broken into more digestible chunks of 8 and 12 lines.
Handling Standard Input
The next thing our program has to do is read data from standard input
if no filenames are given so that we can put it in a pipeline, redirect
input to it, and so on. Let’s experiment in another script called
count_stdin.py
:
PYTHON
import sys
count = 0
for line in sys.stdin:
count += 1
print(count, 'lines in standard input')
This little program reads lines from a special "file" called sys.stdin, which is automatically connected to the program's standard input. We don't have to open it (Python and the operating system take care of that when the program starts up), but we can do almost anything with it that we could do to a regular file. Let's try running it as if it were a regular command-line program:
OUTPUT
5 lines in standard input
A common mistake is to try to run something that reads from standard input by passing the filename as an ordinary argument, i.e., forgetting the < character that redirects the file to standard input. In this case, there's nothing in standard input, so the program waits at the start of the loop for someone to type something on the keyboard. Since there's no way for us to do this, our program is stuck, and we have to halt it using the Interrupt option from the Kernel menu in the Notebook.
We now need to rewrite the program so that it loads data from sys.stdin if no filenames are provided. Luckily, pandas.read_csv can handle either a filename or an open file as its first parameter, so we don't actually need to change process. Only main changes:
PYTHON
import sys
import pandas as pd
def main():
script = sys.argv[0]
action = sys.argv[1]
filenames = sys.argv[2:]
assert action in ['--min', '--mean', '--max'], (
'Action is not one of --min, --mean, or --max: ' + action)
if len(filenames) == 0:
process(sys.stdin, action)
else:
for filename in filenames:
process(filename, action)
def process(filename, action):
data = pd.read_csv(filename, index_col='country')
if action == '--min':
values = data.min(axis='columns')
elif action == '--mean':
values = data.mean(axis='columns')
elif action == '--max':
values = data.max(axis='columns')
for val in values:
print(val)
if __name__ == '__main__':
main()
Let’s try it out:
OUTPUT
0
60
20
That’s better. In fact, that’s done: the program now does everything we set out to do.
PYTHON
import sys
def main():
assert len(sys.argv) == 4, 'Need exactly 3 arguments'
operator = sys.argv[1]
assert operator in ['--add', '--subtract', '--multiply', '--divide'], \
'Operator is not one of --add, --subtract, --multiply, or --divide: bailing out'
try:
operand1, operand2 = float(sys.argv[2]), float(sys.argv[3])
except ValueError:
print('cannot convert input to a number: bailing out')
return
do_arithmetic(operand1, operator, operand2)
def do_arithmetic(operand1, operator, operand2):
    if operator == '--add':
        value = operand1 + operand2
    elif operator == '--subtract':
        value = operand1 - operand2
    elif operator == '--multiply':
        value = operand1 * operand2
    elif operator == '--divide':
        value = operand1 / operand2
print(value)
main()
Finding Particular Files
Using the glob module introduced earlier, write a simple version of ls that shows files in the current directory with a particular suffix. A call to this script with the suffix py should produce output like this:
OUTPUT
left.py
right.py
zero.py
PYTHON
import sys
import glob
def main():
"""prints names of all files with sys.argv as suffix"""
assert len(sys.argv) >= 2, 'Argument list cannot be empty'
suffix = sys.argv[1] # NB: behaviour is not as you'd expect if sys.argv[1] is *
glob_input = '*.' + suffix # construct the input
glob_output = sorted(glob.glob(glob_input)) # call the glob function
for item in glob_output: # print the output
print(item)
return
main()
Changing Flags
Rewrite readings.py so that it uses -n, -m, and -x instead of --min, --mean, and --max respectively. Is the code easier to read? Is the program easier to understand?
PYTHON
# this is code/readings_07.py
import sys
import pandas as pd
def main():
script = sys.argv[0]
action = sys.argv[1]
filenames = sys.argv[2:]
assert action in ['-n', '-m', '-x'], (
'Action is not one of -n, -m, or -x: ' + action)
if len(filenames) == 0:
process(sys.stdin, action)
else:
for filename in filenames:
process(filename, action)
def process(filename, action):
data = pd.read_csv(filename, index_col='country')
if action == '-n':
values = data.min(axis='columns')
elif action == '-m':
values = data.mean(axis='columns')
elif action == '-x':
values = data.max(axis='columns')
for val in values:
print(val)
if __name__ == '__main__':
main()
Adding a Help Message
Separately, modify readings.py so that if no parameters are given (i.e., no action is specified and no filenames are given), it prints a message explaining how it should be used.
PYTHON
# this is code/readings_08.py
import sys
import pandas as pd
def main():
script = sys.argv[0]
if len(sys.argv) == 1: # no arguments, so print help message
print("Usage: python readings_08.py action filenames\n"
"Action:\n"
" Must be one of --min, --mean, or --max.\n"
"Filenames:\n"
" If blank, input is taken from standard input (stdin).\n"
" Otherwise, each filename in the list of arguments is processed in turn.")
return
action = sys.argv[1]
filenames = sys.argv[2:]
assert action in ['--min', '--mean', '--max'], (
'Action is not one of --min, --mean, or --max: ' + action)
if len(filenames) == 0:
process(sys.stdin, action)
else:
for filename in filenames:
process(filename, action)
def process(filename, action):
data = pd.read_csv(filename, index_col='country')
if action == '--min':
values = data.min(axis='columns')
elif action == '--mean':
values = data.mean(axis='columns')
elif action == '--max':
values = data.max(axis='columns')
for val in values:
print(val)
if __name__ == '__main__':
main()
Adding a Default Action
Separately, modify readings.py so that if no action is given it displays the means of the data.
PYTHON
# this is code/readings_09.py
import sys
import pandas as pd

def main():
    script = sys.argv[0]
    # sys.argv[1] may not exist at all, so guard the lookup
    action = sys.argv[1] if len(sys.argv) > 1 else '--mean'
    if action not in ['--min', '--mean', '--max']:  # if no action given
        action = '--mean'  # set a default action, that being mean
        # start the filenames one place earlier in the argv list
        filenames = sys.argv[1:]
    else:
        filenames = sys.argv[2:]

    if len(filenames) == 0:
        process(sys.stdin, action)
    else:
        for filename in filenames:
            process(filename, action)

def process(filename, action):
    data = pd.read_csv(filename, index_col='country')

    if action == '--min':
        values = data.min(axis='columns')
    elif action == '--mean':
        values = data.mean(axis='columns')
    elif action == '--max':
        values = data.max(axis='columns')

    for val in values:
        print(val)

if __name__ == '__main__':
    main()
A File-Checker
Write a program called check.py
that takes the names of
one or more GDP-like CSV data files as arguments and checks that all the
files have the same number of rows and columns. What is the best way to
test your program?
PYTHON
import sys
import pandas as pd

def main():
    script = sys.argv[0]
    filenames = sys.argv[1:]
    if len(filenames) <= 1:  # nothing to check
        print('Only 1 file specified on input')
    else:
        nrow0, ncol0 = row_col_count(filenames[0])
        print('First file %s: %d rows and %d columns' % (
            filenames[0], nrow0, ncol0))
        for filename in filenames[1:]:
            nrow, ncol = row_col_count(filename)
            if nrow != nrow0 or ncol != ncol0:
                print('File %s does not check: %d rows and %d columns'
                      % (filename, nrow, ncol))
            else:
                print('File %s checks' % filename)

def row_col_count(filename):
    try:
        nrow, ncol = pd.read_csv(filename, index_col='country').shape
    except ValueError:
        # raised if the file cannot be parsed as a consistent CSV,
        # or has no 'country' column to use as the index
        nrow, ncol = (0, 0)
    return nrow, ncol

if __name__ == '__main__':
    main()
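The closing question ("What is the best way to test your program?") deserves a concrete answer: create tiny CSV files whose shapes you control, so you know in advance what the checker should report. A minimal sketch along those lines, with made-up filenames and contents, and with the row_col_count helper repeated from the solution above so the snippet is self-contained:

```python
# sketch: build tiny CSV files with known shapes, then confirm that
# row_col_count (from check.py above) reports the shapes we expect
import pandas as pd

def row_col_count(filename):
    """Return (rows, columns) of a GDP-like CSV, or (0, 0) if unreadable."""
    try:
        nrow, ncol = pd.read_csv(filename, index_col='country').shape
    except ValueError:
        nrow, ncol = (0, 0)
    return nrow, ncol

# two files with the same shape, and one with a different shape
with open('same_a.csv', 'w') as f:
    f.write('country,1960,1970\nA,1,2\nB,3,4\n')
with open('same_b.csv', 'w') as f:
    f.write('country,1960,1970\nC,5,6\nD,7,8\n')
with open('different.csv', 'w') as f:
    f.write('country,1960\nE,9\n')

print(row_col_count('same_a.csv'))     # (2, 2)
print(row_col_count('same_b.csv'))     # (2, 2)
print(row_col_count('different.csv'))  # (1, 1)
```

Running `python check.py same_a.csv same_b.csv different.csv` should then report that the first two files check and the third does not.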
Counting Lines
Write a program called line_count.py
that works like the
Unix wc
command in its line-counting mode (wc -l):
- If no filenames are given, it reports the number of lines in standard input.
- If one or more filenames are given, it reports the number of lines in each, followed by the total number of lines.
PYTHON
import sys

def main():
    """print each input filename and the number of lines in it,
    and print the sum of the number of lines"""
    filenames = sys.argv[1:]
    sum_nlines = 0  # initialize counting variable

    if len(filenames) == 0:  # no filenames, just stdin
        sum_nlines = count_file_like(sys.stdin)
        print('stdin: %d' % sum_nlines)
    else:
        for filename in filenames:
            nlines = count_file(filename)
            print('%s %d' % (filename, nlines))
            sum_nlines += nlines
        print('total: %d' % sum_nlines)

def count_file(filename):
    """count the number of lines in a file"""
    with open(filename, 'r') as f:
        return len(f.readlines())

def count_file_like(file_like):
    """count the number of lines in a file-like object (eg stdin)"""
    n = 0
    for line in file_like:
        n = n + 1
    return n

if __name__ == '__main__':
    main()
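One quick sanity check is to run both counting functions on the same known data: count_file on a file, and count_file_like on an io.StringIO object, which behaves like sys.stdin. The filename and contents below are made up for illustration, and the two functions are repeated from the solution above so the snippet is self-contained:

```python
# sketch: confirm count_file and count_file_like agree on the same three lines
import io

def count_file(filename):
    """count the number of lines in a file"""
    with open(filename, 'r') as f:
        return len(f.readlines())

def count_file_like(file_like):
    """count the number of lines in a file-like object (eg stdin)"""
    n = 0
    for line in file_like:
        n = n + 1
    return n

text = 'first\nsecond\nthird\n'
with open('three_lines.txt', 'w') as f:
    f.write(text)

print(count_file('three_lines.txt'))       # 3
print(count_file_like(io.StringIO(text)))  # 3, io.StringIO stands in for stdin
```

Both counts should match the output of `wc -l three_lines.txt`.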
Generate an Error Message
Write a program called check_arguments.py
that prints a
usage message and then exits if no arguments are provided. (Hint: you
can use sys.exit()
to exit the program.)
OUTPUT
usage: python check_arguments.py filename.txt
When one or more arguments are given, the program should instead print:
OUTPUT
Thanks for specifying arguments!
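This exercise is left without a printed solution; one minimal sketch, written as a function so its behaviour is easy to test (a real script would end with check_arguments(sys.argv), and the demo call with a fake argv here is only for illustration):

```python
# a possible check_arguments.py
import sys

def check_arguments(argv):
    """Print usage and exit if no filename arguments were provided."""
    if len(argv) == 1:  # argv[0] is the script name, so length 1 means no arguments
        print('usage: python check_arguments.py filename.txt')
        sys.exit(1)  # a non-zero status tells the shell something went wrong
    print('Thanks for specifying arguments!')

# in the real script this would be check_arguments(sys.argv);
# the fake argv below exercises the "arguments were given" branch
check_arguments(['check_arguments.py', 'filename.txt'])
```

sys.exit() raises SystemExit, so the program stops immediately after printing the usage line.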
Key Points
- The sys library connects a Python program to the system it is running on.
- The list sys.argv contains the command-line arguments that a program was run with.
- Avoid silent failures.
- The pseudo-file sys.stdin connects to a program's standard input.