Author: A.M. Kuchling
This article explains the new features in Python 2.5. The final release of Python 2.5 is scheduled for August 2006; PEP 356 describes the planned release schedule. Python 2.5 was released on September 19, 2006.
The changes in Python 2.5 are an interesting mix of language and library improvements. The library enhancements will be more important to Python's user community, I think, because several widely useful packages were added. New modules include ElementTree for XML processing (xml.etree), the SQLite database module (sqlite3), and the ctypes module for calling C functions.
The language changes are of middling significance. Some pleasant new features were added, but most of them aren't features that you'll use every day. Conditional expressions were finally added to the language using a novel syntax; see section PEP 308: Conditional Expressions. The new 'with' statement will make writing cleanup code easier (section PEP 343: The 'with' statement). Values can now be passed into generators (section PEP 342: New Generator Features). Imports are now visible as either absolute or relative (section PEP 328: Absolute and Relative Imports). Some corner cases of exception handling are handled better (section PEP 341: Unified try/except/finally). All these improvements are worthwhile, but they're improvements to one specific language feature or another; none of them are broad modifications to Python's semantics.
As well as the language and library additions, other improvements and bugfixes were made throughout the source tree. A search through the SVN change logs finds there were 353 patches applied and 458 bugs fixed between Python 2.4 and 2.5. (Both figures are likely to be underestimates.)
This article doesn’t try to be a complete specification of the new features; instead changes are briefly introduced using helpful examples. For full details, you should always refer to the documentation for Python 2.5 at https://docs.python.org. If you want to understand the complete implementation and design rationale, refer to the PEP for a particular new feature.
Comments, suggestions, and error reports for this document are welcome; please e-mail them to the author or open a bug in the Python bug tracker.
For a long time, people have been requesting a way to write conditional expressions, which are expressions that return value A or value B depending on whether a Boolean value is true or false. A conditional expression lets you write a single assignment statement that has the same effect as the following:
if condition:
    x = true_value
else:
    x = false_value
There have been endless tedious discussions of syntax on both python-dev and comp.lang.python. A vote was even held that found the majority of voters wanted conditional expressions in some form, but there was no syntax that was preferred by a clear majority. Candidates included C's cond ? true_v : false_v, if cond then true_v else false_v, and 16 other variations.
Guido van Rossum eventually chose a surprising syntax:
x = true_value if condition else false_value
Evaluation is still lazy as in existing Boolean expressions, so the order of evaluation jumps around a bit. The condition expression in the middle is evaluated first, and the true_value expression is evaluated only if the condition was true. Similarly, the false_value expression is only evaluated when the condition is false.
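To make the evaluation order concrete, here is a small sketch; the helper functions true_branch() and false_branch() are invented purely for this illustration:
def true_branch():
    print 'true_value evaluated'
    return 'A'

def false_branch():
    print 'false_value evaluated'
    return 'B'

# Only one of the two helpers is ever called, depending on the condition.
x = true_branch() if 2 > 1 else false_branch()
# Prints 'true_value evaluated' only; x is now 'A'.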
This syntax may seem strange and backwards; why does the condition go in the middle of the expression, and not in the front as in C's c ? x : y? The decision was checked by applying the new syntax to the modules in the standard library and seeing how the resulting code read. In many cases where a conditional expression is used, one value seems to be the 'common case' and one value is an 'exceptional case', used only on rarer occasions when the condition isn't met. The conditional syntax makes this pattern a bit more obvious:
contents = ((doc + '\n') if doc else '')
I read the above statement as meaning "here contents is usually assigned a value of doc+'\n'; sometimes doc is empty, in which special case an empty string is returned." I doubt I will use conditional expressions very often where there isn't a clear common and uncommon case.
There was some discussion of whether the language should require surrounding conditional expressions with parentheses. The decision was made to not require parentheses in the Python language’s grammar, but as a matter of style I think you should always use them. Consider these two statements:
# First version -- no parens
level = 1 if logging else 0
# Second version -- with parens
level = (1 if logging else 0)
In the first version, I think a reader’s eye might group the statement into ‘level = 1’, ‘if logging’, ‘else 0’, and think that the condition decides whether the assignment to level is performed. The second version reads better, in my opinion, because it makes it clear that the assignment is always performed and the choice is being made between two values.
Another reason for including the brackets: a few odd combinations of list comprehensions and lambdas could look like incorrect conditional expressions. See PEP 308 for some examples. If you put parentheses around your conditional expressions, you won’t run into this case.
See also
PEP 308 - Conditional Expressions
PEP written by Guido van Rossum and Raymond D. Hettinger; implemented by Thomas Wouters.
The functools module is intended to contain tools for functional-style programming.
One useful tool in this module is the partial() function. For programs written in a functional style, you'll sometimes want to construct variants of existing functions that have some of the parameters filled in. Consider a Python function f(a, b, c); you could create a new function g(b, c) that was equivalent to f(1, b, c). This is called "partial function application".
partial() takes the arguments (function, arg1, arg2, ... kwarg1=value1, kwarg2=value2). The resulting object is callable, so you can just call it to invoke function with the filled-in arguments.
Here's a small but realistic example:
import functools

def log (message, subsystem):
    "Write the contents of 'message' to the specified subsystem."
    print '%s: %s' % (subsystem, message)
    ...

server_log = functools.partial(log, subsystem='server')
server_log('Unable to open socket')
Here's another example, from a program that uses PyGTK. Here a context-sensitive pop-up menu is being constructed dynamically. The callback provided for the menu option is a partially applied version of the open_item() method, where the first argument has been provided.
...
class Application:
    def open_item(self, path):
        ...
    def init (self):
        open_func = functools.partial(self.open_item, item_path)
        popup_menu.append( ("Open", open_func, 1) )
Another function in the functools module is the update_wrapper(wrapper, wrapped) function that helps you write well-behaved decorators. update_wrapper() copies the name, module, and docstring attribute to a wrapper function so that tracebacks inside the wrapped function are easier to understand. For example, you might write:
def my_decorator(f):
    def wrapper(*args, **kwds):
        print 'Calling decorated function'
        return f(*args, **kwds)
    functools.update_wrapper(wrapper, f)
    return wrapper
wraps() is a decorator that can be used inside your own decorators to copy the wrapped function's information. An alternate version of the previous example would be:
def my_decorator(f):
    @functools.wraps(f)
    def wrapper(*args, **kwds):
        print 'Calling decorated function'
        return f(*args, **kwds)
    return wrapper
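As a quick usage sketch of the decorator above (the decorated function example() is made up for illustration), the wrapper keeps the original function's name and docstring:
@my_decorator
def example():
    "An example docstring."
    return 42

print example()          # Prints 'Calling decorated function', then 42
print example.__name__   # Prints 'example' rather than 'wrapper'
print example.__doc__    # Prints 'An example docstring.'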
See also
PEP 309 - Partial Function Application
PEP proposed and written by Peter Harris; implemented by Hye-Shik Chang and Nick Coghlan, with adaptations by Raymond Hettinger.
Some simple dependency support was added to Distutils. The setup() function now has requires, provides, and obsoletes keyword parameters. When you build a source distribution using the sdist command, the dependency information will be recorded in the PKG-INFO file.
Another new keyword parameter is download_url, which should be set to a URL for the package's source code. This means it's now possible to look up an entry in the package index, determine the dependencies for a package, and download the required packages.
VERSION = '1.0'
setup(name='PyPackage',
      version=VERSION,
      requires=['numarray', 'zlib (>=1.1.4)'],
      obsoletes=['OldPackage'],
      download_url=('http://www.example.com/pypackage/dist/pkg-%s.tar.gz'
                    % VERSION),
     )
Another new enhancement to the Python package index at https://pypi.org is storing source and binary archives for a package. The new upload Distutils command will upload a package to the repository.
Before a package can be uploaded, you must be able to build a distribution using the sdist Distutils command. Once that works, you can run python setup.py upload to add your package to the PyPI archive. Optionally you can GPG-sign the package by supplying the --sign and --identity options.
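For instance, a build-and-upload invocation might look roughly like this; the key identity is a placeholder for your own GPG identity:
# Build the source distribution and upload it, signing with a GPG key.
python setup.py sdist upload --sign --identity="Your Name"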
The package upload feature was implemented by Martin von Löwis and Richard Jones.
See also
PEP 314 - Metadata for Python Software Packages v1.1
PEP proposed and written by A.M. Kuchling, Richard Jones, and Fred Drake; implemented by Richard Jones and Fred Drake.
The simpler part of PEP 328 was implemented in Python 2.4: parentheses could now be used to enclose the names imported from a module using the from ... import ... statement, making it easier to import many different names.
The more complicated part has been implemented in Python 2.5: importing a module can be specified to use absolute or package-relative imports. The plan is to move toward making absolute imports the default in future versions of Python.
Let’s say you have a package directory like this:
pkg/
pkg/__init__.py
pkg/main.py
pkg/string.py
This defines a package named pkg containing the pkg.main and pkg.string submodules.
Consider the code in the main.py module. What happens if it executes the statement import string? In Python 2.4 and earlier, it will first look in the package's directory to perform a relative import, finds pkg/string.py, imports the contents of that file as the pkg.string module, and that module is bound to the name string in the pkg.main module's namespace.
That's fine if pkg.string was what you wanted. But what if you wanted Python's standard string module? There's no clean way to ignore pkg.string and look for the standard module; generally you had to look at the contents of sys.modules, which is slightly unclean. Holger Krekel's py.std package provides a tidier way to perform imports from the standard library, import py; py.std.string.join(), but that package isn't available on all Python installations.
Reading code which relies on relative imports is also less clear, because a reader may be confused about which module, string or pkg.string, is intended to be used. Python users soon learned not to duplicate the names of standard library modules in the names of their packages' submodules, but you can't protect against having your submodule's name being used for a new module added in a future version of Python.
In Python 2.5, you can switch import's behaviour to absolute imports using a from __future__ import absolute_import directive. This absolute-import behaviour will become the default in a future version (probably Python 2.7). Once absolute imports are the default, import string will always find the standard library's version. It's suggested that users should begin using absolute imports as much as possible, so it's preferable to begin writing from pkg import string in your code.
Relative imports are still possible by adding a leading period to the module name when using the from ... import form:
# Import names from pkg.string
from .string import name1, name2
# Import pkg.string
from . import string
This imports the string module relative to the current package, so in pkg.main this will import name1 and name2 from pkg.string. Additional leading periods perform the relative import starting from the parent of the current package. For example, code in the A.B.C module can do:
from . import D # Imports A.B.D
from .. import E # Imports A.E
from ..F import G # Imports A.F.G
Leading periods cannot be used with the import modname form of the import statement, only the from ... import form.
See also
PEP 328 - Imports: Multi-Line and Absolute/Relative
PEP written by Aahz; implemented by Thomas Wouters.
https://pylib.readthedocs.io/
The py library by Holger Krekel, which contains the py.std package.
The -m switch added in Python 2.4 to execute a module as a script gained a few more abilities. Instead of being implemented in C code inside the Python interpreter, the switch now uses an implementation in a new module, runpy.
The runpy module implements a more sophisticated import mechanism so that it's now possible to run modules in a package such as pychecker.checker. The module also supports alternative import mechanisms such as the zipimport module. This means you can add a .zip archive's path to sys.path and then use the -m switch to execute code from the archive.
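The same machinery can also be driven from Python code; here is a minimal sketch, where the archive path and the pkg.main module are hypothetical:
import sys, runpy

# Make modules inside the archive importable, then run one of them
# as if it had been invoked with "python -m pkg.main".
sys.path.insert(0, '/path/to/archive.zip')
runpy.run_module('pkg.main', run_name='__main__')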
See also
PEP 338 - Executing Modules as Scripts
PEP written and implemented by Nick Coghlan.
Until Python 2.5, the try statement came in two flavours. You could use a finally block to ensure that code is always executed, or one or more except blocks to catch specific exceptions. You couldn't combine both except blocks and a finally block, because generating the right bytecode for the combined version was complicated and it wasn't clear what the semantics of the combined statement should be.
Guido van Rossum spent some time working with Java, which does support the equivalent of combining except blocks and a finally block, and this clarified what the statement should mean. In Python 2.5, you can now write:
try:
    block-1 ...
except Exception1:
    handler-1 ...
except Exception2:
    handler-2 ...
else:
    else-block
finally:
    final-block
The code in block-1 is executed. If the code raises an exception, the various except blocks are tested: if the exception is of class Exception1, handler-1 is executed; otherwise if it's of class Exception2, handler-2 is executed, and so forth. If no exception is raised, the else-block is executed.
No matter what happened previously, the final-block is executed once the code block is complete and any raised exceptions handled. Even if there’s an error in an exception handler or the else-block and a new exception is raised, the code in the final-block is still run.
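Here is a small concrete sketch of the combined form; the file name is made up, but the control flow is the one described above:
f = None
try:
    f = open('settings.txt')
    value = int(f.readline())
except IOError:
    print 'could not read the file'
except ValueError:
    print 'the first line was not an integer'
else:
    print 'read value', value
finally:
    # Runs on success, on a handled error, and on an unhandled one.
    if f is not None:
        f.close()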
See also
PEP 341 - Unifying try-except and try-finally
PEP written by Georg Brandl; implemented by Thomas Lee.
Python 2.5 adds a simple way to pass values into a generator. As introduced in Python 2.3, generators only produce output; once a generator’s code was invoked to create an iterator, there was no way to pass any new information into the function when its execution is resumed. Sometimes the ability to pass in some information would be useful. Hackish solutions to this include making the generator’s code look at a global variable and then changing the global variable’s value, or passing in some mutable object that callers then modify.
To refresh your memory of basic generators, here’s a simple example:
def counter (maximum):
    i = 0
    while i < maximum:
        yield i
        i += 1
When you call counter(10), the result is an iterator that returns the values from 0 up to 9. On encountering the yield statement, the iterator returns the provided value and suspends the function's execution, preserving the local variables. Execution resumes on the following call to the iterator's next() method, picking up after the yield statement.
In Python 2.3, yield was a statement; it didn't return any value. In 2.5, yield is now an expression, returning a value that can be assigned to a variable or otherwise operated on:
val = (yield i)
I recommend that you always put parentheses around a yield expression when you’re doing something with the returned value, as in the above example. The parentheses aren’t always necessary, but it’s easier to always add them instead of having to remember when they’re needed.
(PEP 342 explains the exact rules, which are that a yield-expression must always be parenthesized except when it occurs at the top-level expression on the right-hand side of an assignment. This means you can write val = yield i but have to use parentheses when there's an operation, as in val = (yield i) + 12.)
Values are sent into a generator by calling its send(value) method. The generator's code is then resumed and the yield expression returns the specified value. If the regular next() method is called, the yield returns None.
Here’s the previous example, modified to allow changing the value of the internal counter.
def counter (maximum):
    i = 0
    while i < maximum:
        val = (yield i)
        # If value provided, change counter
        if val is not None:
            i = val
        else:
            i += 1
And here’s an example of changing the counter:
>>> it = counter(10)
>>> print it.next()
0
>>> print it.next()
1
>>> print it.send(8)
8
>>> print it.next()
9
>>> print it.next()
Traceback (most recent call last):
File "t.py", line 15, in ?
print it.next()
StopIteration
yield will usually return None, so you should always check for this case. Don't just use its value in expressions unless you're sure that the send() method will be the only method used to resume your generator function.
In addition to send(), there are two other new methods on generators:
throw(type, value=None, traceback=None) is used to raise an exception inside the generator; the exception is raised by the yield expression where the generator's execution is paused.
close() raises a new GeneratorExit exception inside the generator to terminate the iteration. On receiving this exception, the generator's code must either raise GeneratorExit or StopIteration. Catching the GeneratorExit exception and returning a value is illegal and will trigger a RuntimeError; if the function raises some other exception, that exception is propagated to the caller. close() will also be called by Python's garbage collector when the generator is garbage-collected.
If you need to run cleanup code when a GeneratorExit occurs, I suggest using a try: ... finally: suite instead of catching GeneratorExit.
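A small sketch of that advice in action; the generator and file name below are invented for illustration:
def filereader(filename):
    f = open(filename)
    try:
        for line in f:
            yield line
    finally:
        # Runs whether the generator is exhausted, closed, or collected.
        f.close()

gen = filereader('data.txt')
print gen.next()     # Read the first line.
gen.close()          # Triggers the finally clause and closes the file.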
The cumulative effect of these changes is to turn generators from one-way producers of information into both producers and consumers.
Generators also become coroutines, a more generalized form of subroutines. Subroutines are entered at one point and exited at another point (the top of the function, and a return statement), but coroutines can be entered, exited, and resumed at many different points (the yield statements). We’ll have to figure out patterns for using coroutines effectively in Python.
The addition of the close() method has one side effect that isn't obvious. close() is called when a generator is garbage-collected, so this means the generator's code gets one last chance to run before the generator is destroyed. This last chance means that try...finally statements in generators can now be guaranteed to work; the finally clause will now always get a chance to run. The syntactic restriction that you couldn't mix yield statements with a try...finally suite has therefore been removed. This seems like a minor bit of language trivia, but using generators and try...finally is actually necessary in order to implement the with statement described by PEP 343. I'll look at this new statement in the following section.
Another even more esoteric effect of this change: previously, the gi_frame attribute of a generator was always a frame object. It's now possible for gi_frame to be None once the generator has been exhausted.
See also
PEP 342 - Coroutines via Enhanced Generators
PEP written by Guido van Rossum and Phillip J. Eby; implemented by Phillip J. Eby. Includes some more advanced examples of using generators as coroutines.
Earlier versions of these features were proposed in PEP 288 (by Raymond Hettinger) and PEP 325 (by Samuele Pedroni).
https://en.wikipedia.org/wiki/Coroutine
The Wikipedia entry for coroutines.
https://web.archive.org/web/20160321211320/http://www.sidhe.org/~dan/blog/archives/000178.html
An explanation of coroutines from a Perl point of view, written by Dan Sugalski.
The 'with' statement clarifies code that previously would use try...finally blocks to ensure that clean-up code is executed. In this section, I'll discuss the statement as it will commonly be used. In the next section, I'll examine the implementation details and show how to write objects for use with this statement.
The 'with' statement is a new control-flow structure whose basic structure is:
with expression [as variable]:
    with-block
The expression is evaluated, and it should result in an object that supports the context management protocol (that is, has __enter__() and __exit__() methods).
The object's __enter__() is called before with-block is executed and therefore can run set-up code. It also may return a value that is bound to the name variable, if given. (Note carefully that variable is not assigned the result of expression.)
After execution of the with-block is finished, the object's __exit__() method is called, even if the block raised an exception, and can therefore run clean-up code.
To enable the statement in Python 2.5, you need to add the following directive to your module:
from __future__ import with_statement
The statement will always be enabled in Python 2.6.
Some standard Python objects now support the context management protocol and can be used with the 'with' statement. File objects are one example:
with open('/etc/passwd', 'r') as f:
    for line in f:
        print line
        ... more processing code ...
After this statement has executed, the file object in f will have been automatically closed, even if the for loop raised an exception part-way through the block.
Note
In this case, f is the same object created by open(), because file.__enter__() returns self.
The threading module's locks and condition variables also support the 'with' statement:
lock = threading.Lock()
with lock:
    # Critical section of code
    ...
The lock is acquired before the block is executed and always released once the block is complete.
The new localcontext() function in the decimal module makes it easy to save and restore the current decimal context, which encapsulates the desired precision and rounding characteristics for computations:
from decimal import Decimal, Context, localcontext

# Displays with default precision of 28 digits
v = Decimal('578')
print v.sqrt()

with localcontext(Context(prec=16)):
    # All code in this block uses a precision of 16 digits.
    # The original context is restored on exiting the block.
    print v.sqrt()
Under the hood, the 'with' statement is fairly complicated. Most people will only use 'with' in company with existing objects and don't need to know these details, so you can skip the rest of this section if you like. Authors of new objects will need to understand the details of the underlying implementation and should keep reading.
A high-level explanation of the context management protocol is:
The expression is evaluated and should result in an object called a "context manager". The context manager must have __enter__() and __exit__() methods.
The context manager's __enter__() method is called. The value returned is assigned to VAR. If no 'as VAR' clause is present, the value is simply discarded.
The code in BLOCK is executed.
If BLOCK raises an exception, the __exit__(type, value, traceback) is called with the exception details, the same values returned by sys.exc_info(). The method's return value controls whether the exception is re-raised: any false value re-raises the exception, and True will result in suppressing it. You'll only rarely want to suppress the exception, because if you do the author of the code containing the 'with' statement will never realize anything went wrong.
If BLOCK didn't raise an exception, the __exit__() method is still called, but type, value, and traceback are all None.
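Taken together, these steps behave roughly like the following sketch, simplified from the expansion given in PEP 343 (EXPR, VAR, and BLOCK are placeholders, as above):
mgr = (EXPR)
exit = mgr.__exit__
value = mgr.__enter__()
exc = True
try:
    try:
        VAR = value
        BLOCK
    except:
        # An exception in BLOCK: re-raise unless __exit__ returns a true value.
        exc = False
        if not exit(*sys.exc_info()):
            raise
finally:
    if exc:
        # BLOCK finished without an exception.
        exit(None, None, None)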
Let’s think through an example. I won’t present detailed code but will only sketch the methods necessary for a database that supports transactions.
(For people unfamiliar with database terminology: a set of changes to the database are grouped into a transaction. Transactions can be either committed, meaning that all the changes are written into the database, or rolled back, meaning that the changes are all discarded and the database is unchanged. See any database textbook for more information.)
Let’s assume there’s an object representing a database connection. Our goal will be to let the user write code like this:
db_connection = DatabaseConnection()
with db_connection as cursor:
    cursor.execute('insert into ...')
    cursor.execute('delete from ...')
    # ... more operations ...
The transaction should be committed if the code in the block runs flawlessly or rolled back if there's an exception. Here's the basic interface for DatabaseConnection that I'll assume:
class DatabaseConnection:
    # Database interface
    def cursor (self):
        "Returns a cursor object and starts a new transaction"
    def commit (self):
        "Commits current transaction"
    def rollback (self):
        "Rolls back current transaction"
The __enter__() method is pretty easy, having only to start a new transaction. For this application the resulting cursor object would be a useful result, so the method will return it. The user can then add as cursor to their 'with' statement to bind the cursor to a variable name.
class DatabaseConnection:
    ...
    def __enter__ (self):
        # Code to start a new transaction
        cursor = self.cursor()
        return cursor
The __exit__() method is the most complicated because it's where most of the work has to be done. The method has to check if an exception occurred. If there was no exception, the transaction is committed. The transaction is rolled back if there was an exception.
In the code below, execution will just fall off the end of the function, returning the default value of None. None is false, so the exception will be re-raised automatically. If you wished, you could be more explicit and add a return statement at the marked location.
class DatabaseConnection:
    ...
    def __exit__ (self, type, value, tb):
        if tb is None:
            # No exception, so commit
            self.commit()
        else:
            # Exception occurred, so rollback.
            self.rollback()
            # return False
The new contextlib module provides some functions and a decorator that are useful for writing objects for use with the 'with' statement.
The decorator is called contextmanager(), and lets you write a single generator function instead of defining a new class. The generator should yield exactly one value. The code up to the yield will be executed as the __enter__() method, and the value yielded will be the method's return value that will get bound to the variable in the 'with' statement's as clause, if any. The code after the yield will be executed in the __exit__() method. Any exception raised in the block will be raised by the yield statement.
Our database example from the previous section could be written using this decorator as:
from contextlib import contextmanager

@contextmanager
def db_transaction (connection):
    cursor = connection.cursor()
    try:
        yield cursor
    except:
        connection.rollback()
        raise
    else:
        connection.commit()

db = DatabaseConnection()
with db_transaction(db) as cursor:
    ...
The contextlib module also has a nested(mgr1, mgr2, ...) function that combines a number of context managers so you don't need to write nested 'with' statements. In this example, the single 'with' statement both starts a database transaction and acquires a thread lock:
lock = threading.Lock()
with nested (db_transaction(db), lock) as (cursor, locked):
    ...
Finally, the closing(object) function returns object so that it can be bound to a variable, and calls object.close at the end of the block.
import urllib, sys
from contextlib import closing

with closing(urllib.urlopen('http://www.yahoo.com')) as f:
    for line in f:
        sys.stdout.write(line)
See also
PEP 343 - The "with" statement
PEP written by Guido van Rossum and Nick Coghlan; implemented by Mike Bland, Guido van Rossum, and Neal Norwitz. The PEP shows the code generated for a 'with' statement, which can be helpful in learning how the statement works.
The documentation for the contextlib module.
Exception classes can now be new-style classes, not just classic classes, and the built-in Exception class and all the standard built-in exceptions (NameError, ValueError, etc.) are now new-style classes.
The inheritance hierarchy for exceptions has been rearranged a bit. In 2.5, the inheritance relationships are:
BaseException       # New in Python 2.5
|- KeyboardInterrupt
|- SystemExit
|- Exception
   |- (all other current built-in exceptions)
This rearrangement was done because people often want to catch all exceptions that indicate program errors. KeyboardInterrupt and SystemExit aren't errors, though, and usually represent an explicit action such as the user hitting Control-C or code calling sys.exit(). A bare except: will catch all exceptions, so you commonly need to list KeyboardInterrupt and SystemExit in order to re-raise them. The usual pattern is:
try:
    ...
except (KeyboardInterrupt, SystemExit):
    raise
except:
    # Log error...
    # Continue running program...
In Python 2.5, you can now write except Exception to achieve the same result, catching all the exceptions that usually indicate errors but leaving KeyboardInterrupt and SystemExit alone. As in previous versions, a bare except: still catches all exceptions.
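The earlier pattern can therefore be shortened; in this sketch, main() and log_error() are hypothetical placeholders for your program's entry point and logging helper:
try:
    main()
except Exception, exc:
    # Errors are handled here; KeyboardInterrupt and SystemExit derive
    # from BaseException and propagate without an explicit re-raise.
    log_error(exc)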
The goal for Python 3.0 is to require any class raised as an exception to derive from BaseException or some descendant of BaseException, and future releases in the Python 2.x series may begin to enforce this constraint. Therefore, I suggest you begin making all your exception classes derive from Exception now. It's been suggested that the bare except: form should be removed in Python 3.0, but Guido van Rossum hasn't decided whether to do this or not.
Raising of strings as exceptions, as in the statement raise "Error occurred", is deprecated in Python 2.5 and will trigger a warning. The aim is to be able to remove the string-exception feature in a few releases.
See also
PEP 352 - Required Superclass for Exceptions
PEP written by Brett Cannon and Guido van Rossum; implemented by Brett Cannon.
A wide-ranging change to Python’s C API, using a new Py_ssize_t type definition instead of int, will permit the interpreter to handle more data on 64-bit platforms. This change doesn’t affect Python’s capacity on 32-bit platforms.
Various pieces of the Python interpreter used C's int type to store sizes or counts; for example, the number of items in a list or tuple were stored in an int. The C compilers for most 64-bit platforms still define int as a 32-bit type, so that meant that lists could only hold up to 2**31 - 1 = 2147483647 items. (There are actually a few different programming models that 64-bit C compilers can use — see https://unix.org/version2/whatsnew/lp64_wp.html for a discussion — but the most commonly available model leaves int as 32 bits.)
A limit of 2147483647 items doesn’t really matter on a 32-bit platform because you’ll run out of memory before hitting the length limit. Each list item requires space for a pointer, which is 4 bytes, plus space for a PyObject representing the item. 2147483647*4 is already more bytes than a 32-bit address space can contain.
It’s possible to address that much memory on a 64-bit platform, however. The pointers for a list that size would only require 16 GiB of space, so it’s not unreasonable that Python programmers might construct lists that large. Therefore, the Python interpreter had to be changed to use some type other than int, and this will be a 64-bit type on 64-bit platforms. The change will cause incompatibilities on 64-bit machines, so it was deemed worth making the transition now, while the number of 64-bit users is still relatively small. (In 5 or 10 years, we may all be on 64-bit machines, and the transition would be more painful then.)
This change most strongly affects authors of C extension modules. Python strings and container types such as lists and tuples now use Py_ssize_t to store their size. Functions such as PyList_Size() now return Py_ssize_t. Code in extension modules may therefore need to have some variables changed to Py_ssize_t.
The PyArg_ParseTuple() and Py_BuildValue() functions have a new conversion code, n, for Py_ssize_t. PyArg_ParseTuple()'s s# and t# still output int by default, but you can define the macro PY_SSIZE_T_CLEAN before including Python.h to make them return Py_ssize_t.
PEP 353 has a section on conversion guidelines that extension authors should read to learn about supporting 64-bit platforms.
See also
PEP 353 - Using ssize_t as the index type
PEP written and implemented by Martin von Löwis.
The NumPy developers had a problem that could only be solved by adding a new special method, __index__(). When using slice notation, as in [start:stop:step], the values of the start, stop, and step indexes must all be either integers or long integers. NumPy defines a variety of specialized integer types corresponding to unsigned and signed integers of 8, 16, 32, and 64 bits, but there was no way to signal that these types could be used as slice indexes.
Slicing can't just use the existing __int__() method because that method is also used to implement coercion to integers. If slicing used __int__(), floating-point numbers would also become legal slice indexes and that's clearly an undesirable behaviour.
Instead, a new special method called __index__() was added. It takes no arguments and returns an integer giving the slice index to use. For example:
class C:
    def __index__ (self):
        return self.value
The return value must be either a Python integer or long integer. The interpreter will check that the type returned is correct, and raises a TypeError if this requirement isn’t met.
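To make the behaviour concrete, here is a small toy type (invented for this example) used as a slice index:
class Offset(object):
    "A toy integer-like type that can be used as a slice index."
    def __init__(self, value):
        self.value = value
    def __index__(self):
        return self.value

L = range(10)
print L[Offset(2):Offset(5)]    # Prints [2, 3, 4]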
A corresponding nb_index slot was added to the C-level PyNumberMethods structure to let C extensions implement this protocol. PyNumber_Index(obj) can be used in extension code to call the __index__() function and retrieve its result.
See also
PEP 357 - Allowing Any Object to be Used for Slicing
PEP written and implemented by Travis Oliphant.
Here are all of the changes that Python 2.5 makes to the core Python language.
The dict type has a new hook for letting subclasses provide a default value when a key isn't contained in the dictionary. When a key isn't found, the dictionary's __missing__(key) method will be called. This hook is used to implement the new defaultdict class in the collections module. The following example defines a dictionary that returns zero for any missing key:
class zerodict (dict):
    def __missing__ (self, key):
        return 0

d = zerodict({1:1, 2:2})
print d[1], d[2]   # Prints 1, 2
print d[3], d[4]   # Prints 0, 0
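The collections.defaultdict class mentioned above builds on this hook; a quick sketch of using it for counting:
from collections import defaultdict

# int() returns 0, so every missing key starts at zero.
counts = defaultdict(int)
for word in ['spam', 'eggs', 'spam']:
    counts[word] += 1
print counts['spam']    # Prints 2
print counts['ham']     # Prints 0; the key is created with the default value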
Both 8-bit and Unicode strings have new partition(sep) and rpartition(sep) methods that simplify a common use case.
The find(S) method is often used to get an index which is then used to slice the string and obtain the pieces that are before and after the separator. partition(sep) condenses this pattern into a single method call that returns a 3-tuple containing the substring before the separator, the separator itself, and the substring after the separator. If the separator isn't found, the first element of the tuple is the entire string and the other two elements are empty. rpartition(sep) also returns a 3-tuple but starts searching from the end of the string; the r stands for 'reverse'.
Some examples:
>>> ('http://www.python.org').partition('://')
('http', '://', 'www.python.org')
>>> ('file:/usr/share/doc/index.html').partition('://')
('file:/usr/share/doc/index.html', '', '')
>>> (u'Subject: a quick question').partition(':')
(u'Subject', u':', u' a quick question')
>>> 'www.python.org'.rpartition('.')
('www.python', '.', 'org')
>>> 'www.python.org'.rpartition(':')
('', '', 'www.python.org')
(Implemented by Fredrik Lundh following a suggestion by Raymond Hettinger.)
The startswith() and endswith() methods of string types now accept tuples of strings to check for.
def is_image_file (filename):
    return filename.endswith(('.gif', '.jpg', '.tiff'))
(Implemented by Georg Brandl following a suggestion by Tom Lynn.)
The min() and max() built-in functions gained a key keyword parameter analogous to the key argument for sort(). This parameter supplies a function that takes a single argument and is called for every value in the list; min()/max() will return the element with the smallest/largest return value from this function. For example, to find the longest string in a list, you can do:
L = ['medium', 'longest', 'short']
# Prints 'longest'
print max(L, key=len)
# Prints 'short', because lexicographically 'short' has the largest value
print max(L)
(Contributed by Steven Bethard and Raymond Hettinger.)
Two new built-in functions, any() and all(), evaluate whether an iterator contains any true or false values. any() returns True if any value returned by the iterator is true; otherwise it will return False. all() returns True only if all of the values returned by the iterator evaluate as true. (Suggested by Guido van Rossum, and implemented by Raymond Hettinger.)
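A few quick examples of the new built-ins:
print any([0, '', False])     # Prints False: no true values
print any([0, 1, 2])          # Prints True
print all([1, 'yes', [3]])    # Prints True: every value is true
print all([1, 0, 3])          # Prints False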
The result of a class's __hash__() method can now be either a long integer or a regular integer. If a long integer is returned, the hash of that value is taken. In earlier versions the hash value was required to be a regular integer, but in 2.5 the id() built-in was changed to always return non-negative numbers, and users often seem to use id(self) in __hash__() methods (though this is discouraged).
ASCII is now the default encoding for modules. It’s now a syntax error if a module contains string literals with 8-bit characters but doesn’t have an encoding declaration. In Python 2.4 this triggered a warning, not a syntax error. See PEP 263 for how to declare a module’s encoding; for example, you might add a line like this near the top of the source file:
# -*- coding: latin1 -*-
A new warning, UnicodeWarning, is triggered when you attempt to compare a Unicode string and an 8-bit string that can't be converted to Unicode using the default ASCII encoding.