What's New In Python 3.2

Author

Raymond Hettinger (translator: wh2099 at outlook dot com)

This article explains the new features in Python 3.2 as compared to 3.1. Python 3.2 was released on February 20, 2011. It focuses on a few highlights and gives a few examples. For full details, see the Misc/NEWS file.

See also

PEP 392 - Python 3.2 Release Schedule

PEP 384: Defining a Stable ABI

In the past, extension modules built for one Python version were often not usable with other Python versions. Particularly on Windows, every feature release of Python required rebuilding all extension modules that one wanted to use. This requirement was the result of the free access to Python interpreter internals that extension modules could use.

With Python 3.2, an alternative approach becomes available: extension modules which restrict themselves to a limited API (by defining Py_LIMITED_API) cannot use many of the internals, but are constrained to a set of API functions that are promised to be stable for several releases. As a consequence, extension modules built for 3.2 in that mode will also work with 3.3, 3.4, and so on. Extension modules that make use of details of memory structures can still be built, but will need to be recompiled for every feature release.

See also

PEP 384 - Defining a Stable ABI

PEP written by Martin von Löwis

PEP 389: Argparse Command Line Parsing Module

A new module for command line parsing, argparse, was introduced to overcome the limitations of optparse which did not provide support for positional arguments (not just options), subcommands, required options and other common patterns of specifying and validating options.

This module has already had widespread success in the community as a third-party module. Being more fully featured than its predecessor, the argparse module is now the preferred module for command-line processing. The older module is still being kept available because of the substantial amount of legacy code that depends on it.

Here’s an annotated example parser showing features like limiting results to a set of choices, specifying a metavar in the help screen, validating that one or more positional arguments is present, and making a required option:

    import argparse
    parser = argparse.ArgumentParser(
                description = 'Manage servers',             # main description for help
                epilog = 'Tested on Solaris and Linux')     # displayed after help
    parser.add_argument('action',                           # argument name
                        choices = ['deploy', 'start', 'stop'],  # three allowed values
                        help = 'action on each target')         # help msg
    parser.add_argument('targets',
                        metavar = 'HOSTNAME',               # var name used in help msg
                        nargs = '+',                        # require one or more targets
                        help = 'url for target machines')   # help msg explanation
    parser.add_argument('-u', '--user',                     # -u or --user option
                        required = True,                    # make it a required argument
                        help = 'login as user')

Example of calling the parser on a command string:

    >>> cmd = 'deploy sneezy.example.com sleepy.example.com -u skycaptain'
    >>> result = parser.parse_args(cmd.split())
    >>> result.action
    'deploy'
    >>> result.targets
    ['sneezy.example.com', 'sleepy.example.com']
    >>> result.user
    'skycaptain'

Example of the parser's automatically generated help:

    >>> parser.parse_args('-h'.split())

    usage: manage_cloud.py [-h] -u USER
                           {deploy,start,stop} HOSTNAME [HOSTNAME ...]

    Manage servers

    positional arguments:
      {deploy,start,stop}   action on each target
      HOSTNAME              url for target machines

    optional arguments:
      -h, --help            show this help message and exit
      -u USER, --user USER  login as user

    Tested on Solaris and Linux

An especially nice argparse feature is the ability to define subparsers, each with their own argument patterns and help displays:

    import argparse
    parser = argparse.ArgumentParser(prog='HELM')
    subparsers = parser.add_subparsers()

    parser_l = subparsers.add_parser('launch', help='Launch Control')  # first subgroup
    parser_l.add_argument('-m', '--missiles', action='store_true')
    parser_l.add_argument('-t', '--torpedos', action='store_true')

    parser_m = subparsers.add_parser('move', help='Move Vessel',       # second subgroup
                                     aliases=('steer', 'turn'))        # equivalent names
    parser_m.add_argument('-c', '--course', type=int, required=True)
    parser_m.add_argument('-s', '--speed', type=int, default=0)

    $ ./helm.py --help                         # top level help (launch and move)
    $ ./helm.py launch --help                  # help for launch options
    $ ./helm.py launch --missiles              # set missiles=True and torpedos=False
    $ ./helm.py steer --course 180 --speed 5   # set movement parameters

See also

PEP 389 - New Command Line Parsing Module

PEP written by Steven Bethard

See Upgrading optparse code for details of the differences from optparse.

PEP 391: Dictionary Based Configuration for Logging

The logging module provided two kinds of configuration, one style with function calls for each option or another style driven by an external file saved in a ConfigParser format. Those options did not provide the flexibility to create configurations from JSON or YAML files, nor did they support incremental configuration, which is needed for specifying logger options from a command line.

To support a more flexible style, the module now offers logging.config.dictConfig() for specifying logging configuration with plain Python dictionaries. The configuration options include formatters, handlers, filters, and loggers. Here’s a working example of a configuration dictionary:

    {"version": 1,
     "formatters": {"brief": {"format": "%(levelname)-8s: %(name)-15s: %(message)s"},
                    "full": {"format": "%(asctime)s %(name)-15s %(levelname)-8s %(message)s"}
                    },
     "handlers": {"console": {
                       "class": "logging.StreamHandler",
                       "formatter": "brief",
                       "level": "INFO",
                       "stream": "ext://sys.stdout"},
                  "console_priority": {
                       "class": "logging.StreamHandler",
                       "formatter": "full",
                       "level": "ERROR",
                       "stream": "ext://sys.stderr"}
                  },
     "root": {"level": "DEBUG", "handlers": ["console", "console_priority"]}}

If that dictionary is stored in a file called conf.json, it can be loaded and called with code like this:

    >>> import json, logging.config
    >>> with open('conf.json') as f:
    ...     conf = json.load(f)
    ...
    >>> logging.config.dictConfig(conf)
    >>> logging.info("Transaction completed normally")
    INFO    : root           : Transaction completed normally
    >>> logging.critical("Abnormal termination")
    2011-02-17 11:14:36,694 root            CRITICAL Abnormal termination

See also

PEP 391 - Dictionary Based Configuration for Logging

PEP written by Vinay Sajip

PEP 3148: The concurrent.futures Module

Code for creating and managing concurrency is being collected in a new top-level namespace, concurrent. Its first member is a futures package which provides a uniform high-level interface for managing threads and processes.

The design for concurrent.futures was inspired by the java.util.concurrent package. In that model, a running call and its result are represented by a Future object that abstracts features common to threads, processes, and remote procedure calls. That object supports status checks (running or done), timeouts, cancellations, adding callbacks, and access to results or exceptions.

The primary offering of the new module is a pair of executor classes for launching and managing calls. The goal of the executors is to make it easier to use existing tools for making parallel calls. They save the effort needed to set up a pool of resources, launch the calls, create a results queue, add time-out handling, and limit the total number of threads, processes, or remote procedure calls.

Ideally, each application should share a single executor across multiple components so that process and thread limits can be centrally managed. This solves the design challenge that arises when each component has its own competing strategy for resource management.

Both classes share a common interface with three methods: submit() for scheduling a callable and returning a Future object; map() for scheduling many asynchronous calls at a time; and shutdown() for freeing resources. Each class is also a context manager and can be used in a with statement to assure that resources are automatically released when currently pending futures are done executing.

A simple example of ThreadPoolExecutor is a launch of four parallel threads for copying files:

    import concurrent.futures, shutil
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as e:
        e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
        e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
        e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
        e.submit(shutil.copy, 'src4.txt', 'dest4.txt')
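The shared interface's map() method covers the fan-out case just as concisely. A brief sketch (the square function and worker count below are invented for the illustration); results come back in the order the inputs were submitted:

```python
import concurrent.futures

def square(n):
    return n * n

# map() schedules many calls at once and yields results in input order;
# the with statement calls shutdown() automatically on exit
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as e:
    results = list(e.map(square, range(8)))

print(results)
```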

See also

PEP 3148 - futures - execute computations asynchronously

PEP written by Brian Quinlan

Code for Threaded Parallel URL reads, an example using threads to fetch multiple web pages in parallel.

Code for computing prime numbers in parallel, an example demonstrating ProcessPoolExecutor.

PEP 3147: PYC Repository Directories

Python’s scheme for caching bytecode in .pyc files did not work well in environments with multiple Python interpreters. If one interpreter encountered a cached file created by another interpreter, it would recompile the source and overwrite the cached file, thus losing the benefits of caching.

The issue of “pyc fights” has become more pronounced as it has become commonplace for Linux distributions to ship with multiple versions of Python. These conflicts also arise with CPython alternatives such as Unladen Swallow.

To solve this problem, Python’s import machinery has been extended to use distinct filenames for each interpreter. Instead of Python 3.2 and Python 3.3 and Unladen Swallow each competing for a file called “mymodule.pyc”, they will now look for “mymodule.cpython-32.pyc”, “mymodule.cpython-33.pyc”, and “mymodule.unladen10.pyc”. And to prevent all of these new files from cluttering source directories, the pyc files are now collected in a “__pycache__“ directory stored under the package directory.

Aside from the filenames and target directories, the new scheme has a few aspects that are visible to the programmer:

  • Imported modules now have a __cached__ attribute which stores the name of the actual file that was imported:

        >>> import collections
        >>> collections.__cached__
        'c:/py32/lib/__pycache__/collections.cpython-32.pyc'
  • The tag that is unique to each interpreter is accessible from the imp module:

        >>> import imp
        >>> imp.get_tag()
        'cpython-32'
  • Scripts that try to deduce source filename from the imported file now need to be smarter. It is no longer sufficient to simply strip the “c” from a “.pyc” filename. Instead, use the new functions in the imp module:

        >>> imp.source_from_cache('c:/py32/lib/__pycache__/collections.cpython-32.pyc')
        'c:/py32/lib/collections.py'
        >>> imp.cache_from_source('c:/py32/lib/collections.py')
        'c:/py32/lib/__pycache__/collections.cpython-32.pyc'
  • The py_compile and compileall modules have been updated to reflect the new naming convention and target directory. The command-line invocation of compileall has new options: -i for specifying a list of files and directories to compile and -b which causes bytecode files to be written to their legacy location rather than __pycache__.

  • The importlib.abc module has been updated with new abstract base classes for loading bytecode files. The obsolete ABCs, PyLoader and PyPycLoader, have been deprecated (instructions on how to stay Python 3.1 compatible are included with the documentation).

See also

PEP 3147 - PYC Repository Directories

PEP written by Barry Warsaw

PEP 3149: ABI Version Tagged .so Files

The PYC repository directory allows multiple bytecode cache files to be co-located. This PEP implements a similar mechanism for shared object files by giving them a common directory and distinct names for each version.

The common directory is “pyshared” and the file names are made distinct by identifying the Python implementation (such as CPython, PyPy, Jython, etc.), the major and minor version numbers, and optional build flags (such as “d” for debug, “m” for pymalloc, “u” for wide-unicode). For an arbitrary package “foo”, you may see these files when the distribution package is installed:

    /usr/share/pyshared/foo.cpython-32m.so
    /usr/share/pyshared/foo.cpython-33md.so

In Python itself, the tags are accessible from functions in the sysconfig module:

    >>> import sysconfig
    >>> sysconfig.get_config_var('SOABI')        # find the version tag
    'cpython-32mu'
    >>> sysconfig.get_config_var('EXT_SUFFIX')   # find the full filename extension
    '.cpython-32mu.so'

See also

PEP 3149 - ABI Version Tagged .so Files

PEP written by Barry Warsaw

PEP 3333: Python Web Server Gateway Interface v1.0.1

This informational PEP clarifies how bytes/text issues are to be handled by the WSGI protocol. The challenge is that string handling in Python 3 is most conveniently handled with the str type even though the HTTP protocol is itself bytes oriented.

The PEP differentiates so-called native strings that are used for request/response headers and metadata versus byte strings which are used for the bodies of requests and responses.

The native strings are always of type str but are restricted to code points between U+0000 through U+00FF which are translatable to bytes using Latin-1 encoding. These strings are used for the keys and values in the environment dictionary and for response headers and statuses in the start_response() function. They must follow RFC 2616 with respect to encoding. That is, they must either be ISO-8859-1 characters or use RFC 2047 MIME encoding.

For developers porting WSGI applications from Python 2, here are the salient points:

  • If the app already used strings for headers in Python 2, no change is needed.

  • If instead, the app encoded output headers or decoded input headers, then the headers will need to be re-encoded to Latin-1. For example, an output header that was encoded in utf-8 using h.encode('utf-8') now needs to be converted from bytes to native strings using h.encode('utf-8').decode('latin-1').

  • Values yielded by an application or sent using the write() method must be byte strings. The start_response() function and environ must use native strings. The two cannot be mixed.
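The Latin-1 round-trip from the second point can be checked in isolation. This is only an illustrative sketch with an invented header value, not part of any WSGI server:

```python
# A decoded text header value an application might want to emit
h = 'caf\u00e9 r\u00e9sum\u00e9'

# Re-encode to UTF-8 bytes, then reinterpret those bytes as a
# Latin-1 "native string" as PEP 3333 requires for headers
native = h.encode('utf-8').decode('latin-1')

# Every code point now falls in the U+0000..U+00FF range
print(all(ord(c) <= 0xFF for c in native))

# The original text is recoverable by reversing the steps
print(native.encode('latin-1').decode('utf-8') == h)
```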

For server implementers writing CGI-to-WSGI pathways or other CGI-style protocols, users must be able to access the environment using native strings even though the underlying platform may have a different convention. To bridge this gap, the wsgiref module has a new function, wsgiref.handlers.read_environ(), for transcoding CGI variables from os.environ into native strings and returning a new dictionary.
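A minimal sketch of that bridge (the variable name below is invented for the demonstration; an ASCII value passes through unchanged):

```python
import os
from wsgiref.handlers import read_environ

# Plant a CGI-style variable, then transcode os.environ into a
# new dictionary of native strings
os.environ['HTTP_X_DEMO'] = 'hello'
environ = read_environ()
print(environ['HTTP_X_DEMO'])
```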

See also

PEP 3333 - Python Web Server Gateway Interface v1.0.1

PEP written by Phillip Eby

Other Language Changes

Some smaller changes made to the core Python language are:

  • String formatting for format() and str.format() gained new capabilities for the format character #. Previously, for integers in binary, octal, or hexadecimal, it caused the output to be prefixed with ‘0b’, ‘0o’, or ‘0x’ respectively. Now it can also handle floats, complex, and Decimal, causing the output to always have a decimal point even when no digits follow it.

        >>> format(20, '#o')
        '0o24'
        >>> format(12.34, '#5.0f')
        ' 12.'

    (Suggested by Mark Dickinson and implemented by Eric Smith in bpo-7094.)

  • There is also a new str.format_map() method that extends the capabilities of the existing str.format() method by accepting arbitrary mapping objects. This new method makes it possible to use string formatting with any of Python’s many dictionary-like objects such as defaultdict, Shelf, ConfigParser, or dbm. It is also useful with custom dict subclasses that normalize keys before look-up or that supply a __missing__() method for unknown keys:

        >>> import shelve
        >>> d = shelve.open('tmp.shl')
        >>> 'The {project_name} status is {status} as of {date}'.format_map(d)
        'The testing project status is green as of February 15, 2011'

        >>> class LowerCasedDict(dict):
        ...     def __getitem__(self, key):
        ...         return dict.__getitem__(self, key.lower())
        ...
        >>> lcd = LowerCasedDict(part='widgets', quantity=10)
        >>> 'There are {QUANTITY} {Part} in stock'.format_map(lcd)
        'There are 10 widgets in stock'

        >>> class PlaceholderDict(dict):
        ...     def __missing__(self, key):
        ...         return '<{}>'.format(key)
        ...
        >>> 'Hello {name}, welcome to {location}'.format_map(PlaceholderDict())
        'Hello <name>, welcome to <location>'

(Suggested by Raymond Hettinger and implemented by Eric Smith in bpo-6081.)

  • The interpreter can now be started with a quiet option, -q, to prevent the copyright and version information from being displayed in the interactive mode. The option can be introspected using the sys.flags attribute:

        $ python -q
        >>> import sys
        >>> sys.flags
        sys.flags(debug=0, division_warning=0, inspect=0, interactive=0,
                  optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0,
                  ignore_environment=0, verbose=0, bytes_warning=0, quiet=1)

(Contributed by Marcin Wojdyr in bpo-1772833.)

  • The hasattr() function works by calling getattr() and detecting whether an exception is raised. This technique allows it to detect methods created dynamically by __getattr__() or __getattribute__() which would otherwise be absent from the class dictionary. Formerly, hasattr would catch any exception, possibly masking genuine errors. Now, hasattr has been tightened to only catch AttributeError and let other exceptions pass through:

        >>> class A:
        ...     @property
        ...     def f(self):
        ...         return 1 // 0
        ...
        >>> a = A()
        >>> hasattr(a, 'f')
        Traceback (most recent call last):
          ...
        ZeroDivisionError: integer division or modulo by zero

(Discovered by Yury Selivanov and fixed by Benjamin Peterson; bpo-9666.)

  • The str() of a float or complex number is now the same as its repr(). Previously, the str() form was shorter but that just caused confusion and is no longer needed now that the shortest possible repr() is displayed by default:

        >>> import math
        >>> repr(math.pi)
        '3.141592653589793'
        >>> str(math.pi)
        '3.141592653589793'

(Proposed and implemented by Mark Dickinson; bpo-9337.)

  • memoryview objects now have a release() method and they also now support the context management protocol. This allows timely release of any resources that were acquired when requesting a buffer from the original object.

        >>> with memoryview(b'abcdefgh') as v:
        ...     print(v.tolist())
        ...
        [97, 98, 99, 100, 101, 102, 103, 104]

(Added by Antoine Pitrou; bpo-9757.)

  • Previously it was illegal to delete a name from the local namespace if it occurs as a free variable in a nested block:

        def outer(x):
            def inner():
                return x
            inner()
            del x

    This is now allowed. Remember that the target of an except clause is cleared, so this code, which worked with Python 2.6, raised a SyntaxError in Python 3.1 and now works again:

        def f():
            def print_error():
                print(e)
            try:
                something
            except Exception as e:
                print_error()
            # implicit "del e" here

(See bpo-4617.)

  • The internal structsequence tool now creates subclasses of tuple. This means that C structures like those returned by os.stat(), time.gmtime(), and sys.version_info now work like a named tuple and now work with functions and methods that expect a tuple as an argument. This is a big step forward in making the C structures as flexible as their pure Python counterparts:

        >>> import sys
        >>> isinstance(sys.version_info, tuple)
        True
        >>> 'Version %d.%d.%d %s(%d)' % sys.version_info
        'Version 3.2.0 final(0)'

    (Suggested by Arfrever Frehtes Taifersar Arahesis and implemented by Benjamin Peterson in bpo-8413.)

  • Warnings are now easier to control using the PYTHONWARNINGS environment variable as an alternative to using -W at the command line:

        $ export PYTHONWARNINGS='ignore::RuntimeWarning::,once::UnicodeWarning::'

    (Suggested by Barry Warsaw and implemented by Philip Jenvey in bpo-7301.)

  • A new warning category, ResourceWarning, has been added. It is emitted when potential issues with resource consumption or cleanup are detected. It is silenced by default in normal release builds but can be enabled through the means provided by the warnings module, or on the command line.

    A ResourceWarning is issued at interpreter shutdown if the gc.garbage list isn’t empty, and if gc.DEBUG_UNCOLLECTABLE is set, all uncollectable objects are printed. This is meant to make the programmer aware that their code contains object finalization issues.

    A ResourceWarning is also issued when a file object is destroyed without having been explicitly closed. While the deallocator for such object ensures it closes the underlying operating system resource (usually, a file descriptor), the delay in deallocating the object could produce various issues, especially under Windows. Here is an example of enabling the warning from the command line:

        $ python -q -Wdefault
        >>> f = open("foo", "wb")
        >>> del f
        __main__:1: ResourceWarning: unclosed file <_io.BufferedWriter name='foo'>

    (Added by Antoine Pitrou and Georg Brandl in bpo-10093 and bpo-477863.)

  • range objects now support index and count methods. This is part of an effort to make more objects fully implement the collections.Sequence abstract base class. As a result, the language will have a more uniform API. In addition, range objects now support slicing and negative indices, even with values larger than sys.maxsize. This makes range more interoperable with lists:

        >>> range(0, 100, 2).count(10)
        1
        >>> range(0, 100, 2).index(10)
        5
        >>> range(0, 100, 2)[5]
        10
        >>> range(0, 100, 2)[0:5]
        range(0, 10, 2)

(Contributed by Daniel Stutzbach in bpo-9213, Alexander Belopolsky in bpo-2690, and Nick Coghlan in bpo-10889.)

  • The callable() builtin function from Py2.x was resurrected. It provides a concise, readable alternative to using an abstract base class in an expression like isinstance(x, collections.Callable):

        >>> callable(max)
        True
        >>> callable(20)
        False

(See bpo-10518.)

  • Python’s import mechanism can now load modules installed in directories with non-ASCII characters in the path name. This solved an aggravating problem with home directories for users with non-ASCII characters in their usernames.

(Required extensive work by Victor Stinner in bpo-9425.)

New, Improved, and Deprecated Modules

Python’s standard library has undergone significant maintenance efforts and quality improvements.

The biggest news for Python 3.2 is that the email package, mailbox module, and nntplib modules now work correctly with the bytes/text model in Python 3. For the first time, there is correct handling of messages with mixed encodings.

Throughout the standard library, there has been more careful attention to encodings and text versus bytes issues. In particular, interactions with the operating system are now better able to exchange non-ASCII data using the Windows MBCS encoding, locale-aware encodings, or UTF-8.

Another significant win is the addition of substantially better support for SSL connections and security certificates.

In addition, more classes now implement a context manager to support convenient and reliable resource clean-up using a with statement.

email

The usability of the email package in Python 3 has been mostly fixed by the extensive efforts of R. David Murray. The problem was that emails are typically read and stored in the form of bytes rather than str text, and they may contain multiple encodings within a single email. So, the email package had to be extended to parse and generate email messages in bytes format.

  • New functions message_from_bytes() and message_from_binary_file(), and new classes BytesFeedParser and BytesParser allow binary message data to be parsed into model objects.

  • Given bytes input to the model, get_payload() will by default decode a message body that has a Content-Transfer-Encoding of 8bit using the charset specified in the MIME headers and return the resulting string.

  • Given bytes input to the model, Generator will convert message bodies that have a Content-Transfer-Encoding of 8bit to instead have a 7bit Content-Transfer-Encoding.

    Headers with unencoded non-ASCII bytes are deemed to be RFC 2047-encoded using the unknown-8bit character set.

  • A new class BytesGenerator produces bytes as output, preserving any unchanged non-ASCII data that was present in the input used to build the model, including message bodies with a Content-Transfer-Encoding of 8bit.

  • The smtplib SMTP class now accepts a byte string for the msg argument to the sendmail() method, and a new method, send_message() accepts a Message object and can optionally obtain the from_addr and to_addrs addresses directly from the object.

(Proposed and implemented by R. David Murray, bpo-4661 and bpo-10321.)
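A small sketch of the new bytes-parsing entry point (the message content is invented for the example):

```python
from email import message_from_bytes

# A tiny RFC 2822 message held as bytes, as it might arrive over a socket
raw = (b"From: alice@example.com\r\n"
       b"To: bob@example.com\r\n"
       b"Subject: Status\r\n"
       b"\r\n"
       b"All systems nominal.\r\n")

msg = message_from_bytes(raw)
print(msg['Subject'])        # headers come back as strings
print(msg.get_payload())     # the body text
```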

elementtree

The xml.etree.ElementTree package and its xml.etree.cElementTree counterpart have been updated to version 1.3.

Several new and useful functions and methods have been added:

  • xml.etree.ElementTree.fromstringlist() which builds an XML document from a sequence of fragments

  • xml.etree.ElementTree.register_namespace() for registering a global namespace prefix

  • xml.etree.ElementTree.tostringlist() for string representation including all sublists

  • xml.etree.ElementTree.Element.extend() for appending a sequence of zero or more elements

  • xml.etree.ElementTree.Element.iterfind() searches an element and subelements

  • xml.etree.ElementTree.Element.itertext() creates a text iterator over an element and its subelements

  • xml.etree.ElementTree.TreeBuilder.end() closes the current element

  • xml.etree.ElementTree.TreeBuilder.doctype() handles a doctype declaration

Two methods have been deprecated:

  • xml.etree.ElementTree.getchildren() use list(elem) instead.

  • xml.etree.ElementTree.getiterator() use Element.iter instead.

For details of the update, see Introducing ElementTree on Fredrik Lundh’s website.

(Contributed by Florent Xicluna and Fredrik Lundh, bpo-6472.)
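A few of the additions can be sketched together; the XML content here is invented for the example:

```python
import xml.etree.ElementTree as ET

# fromstringlist() assembles a document from a sequence of fragments
root = ET.fromstringlist(['<crew>',
                          '<member>Kirk</member>',
                          '<member>Spock</member>',
                          '</crew>'])

# Element.extend() appends a sequence of elements in one call
extra = ET.Element('member')
extra.text = 'McCoy'
root.extend([extra])

# itertext() iterates over the text of an element and its subelements
print(list(root.itertext()))
```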

functools

  • The functools module includes a new decorator for caching function calls. functools.lru_cache() can save repeated queries to an external resource whenever the results are expected to be the same.

    For example, adding a caching decorator to a database query function can save database accesses for popular searches:

        >>> import functools
        >>> @functools.lru_cache(maxsize=300)
        ... def get_phone_number(name):
        ...     c = conn.cursor()
        ...     c.execute('SELECT phonenumber FROM phonelist WHERE name=?', (name,))
        ...     return c.fetchone()[0]
        ...
        >>> for name in user_requests:
        ...     get_phone_number(name)        # cached lookup

    To help with choosing an effective cache size, the wrapped function is instrumented for tracking cache statistics:

        >>> get_phone_number.cache_info()
        CacheInfo(hits=4805, misses=980, maxsize=300, currsize=300)

    If the phonelist table gets updated, the outdated contents of the cache can be cleared with:

        >>> get_phone_number.cache_clear()

    (Contributed by Raymond Hettinger and incorporating design ideas from Jim Baker, Miki Tebeka, and Nick Coghlan; see recipe 498245, recipe 577479, bpo-10586, and bpo-10593.)

  • The functools.wraps() decorator now adds a __wrapped__ attribute pointing to the original callable function. This allows wrapped functions to be introspected. It also copies __annotations__ if defined. And now it also gracefully skips over missing attributes such as __doc__ which might not be defined for the wrapped callable.

    In the above example, the cache can be removed by recovering the original function:

        >>> get_phone_number = get_phone_number.__wrapped__    # uncached function

    (By Nick Coghlan and Terrence Cole; bpo-9567, bpo-3445, and bpo-8814.)

  • To help write classes with rich comparison methods, a new decorator functools.total_ordering() will use existing equality and inequality methods to fill in the remaining methods.

    For example, supplying __eq__ and __lt__ will enable total_ordering() to fill-in __le__, __gt__ and __ge__:

        from functools import total_ordering

        @total_ordering
        class Student:
            def __eq__(self, other):
                return ((self.lastname.lower(), self.firstname.lower()) ==
                        (other.lastname.lower(), other.firstname.lower()))
            def __lt__(self, other):
                return ((self.lastname.lower(), self.firstname.lower()) <
                        (other.lastname.lower(), other.firstname.lower()))

    With the total_ordering decorator, the remaining comparison methods are filled in automatically.

(Contributed by Raymond Hettinger.)

  • To aid in porting programs from Python 2, the functools.cmp_to_key() function converts an old-style comparison function to a modern key function:

        >>> # locale-aware sort order
        >>> sorted(iterable, key=cmp_to_key(locale.strcoll))

    For sorting examples and a brief sorting tutorial, see the Sorting HowTo tutorial.

(Contributed by Raymond Hettinger.)
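Since strcoll output depends on the active locale, a locale-independent sketch may be clearer; the comparison function below is invented for the illustration:

```python
from functools import cmp_to_key

# An old-style comparison function: returns negative, zero, or positive
def compare_lengths(a, b):
    return len(a) - len(b)

words = ['kiwi', 'fig', 'banana', 'date']
print(sorted(words, key=cmp_to_key(compare_lengths)))
```

Because the sort is stable, 'kiwi' stays ahead of 'date' even though both have length four.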

itertools

  • The itertools module has a new accumulate() function modeled on APL’s scan operator and Numpy’s accumulate function:

        >>> from itertools import accumulate
        >>> list(accumulate([8, 2, 50]))
        [8, 10, 60]

        >>> prob_dist = [0.1, 0.4, 0.2, 0.3]
        >>> list(accumulate(prob_dist))   # cumulative probability distribution
        [0.1, 0.5, 0.7, 1.0]

    For an example using accumulate(), see the examples for the random module.

    (Contributed by Raymond Hettinger and incorporating design suggestions from Mark Dickinson.)

collections

  • The collections.Counter class now has two forms of in-place subtraction, the existing -= operator for saturating subtraction and the new subtract() method for regular subtraction. The former is suitable for multisets which only have positive counts, and the latter is more suitable for use cases that allow negative counts:

        >>> from collections import Counter
        >>> tally = Counter(dogs=5, cats=3)
        >>> tally -= Counter(dogs=2, cats=8)    # saturating subtraction
        >>> tally
        Counter({'dogs': 3})

        >>> tally = Counter(dogs=5, cats=3)
        >>> tally.subtract(dogs=2, cats=8)      # regular subtraction
        >>> tally
        Counter({'dogs': 3, 'cats': -5})

(Contributed by Raymond Hettinger.)

  • The collections.OrderedDict class has a new method move_to_end() which takes an existing key and moves it to either the first or last position in the ordered sequence.

    The default is to move an item to the last position. This is equivalent to renewing an entry with od[k] = od.pop(k).

    A fast move-to-end operation is useful for resequencing entries. For example, an ordered dictionary can be used to track order of access by aging entries from the oldest to the most recently accessed.

        >>> from collections import OrderedDict
        >>> d = OrderedDict.fromkeys(['a', 'b', 'X', 'd', 'e'])
        >>> list(d)
        ['a', 'b', 'X', 'd', 'e']
        >>> d.move_to_end('X')
        >>> list(d)
        ['a', 'b', 'd', 'e', 'X']

(Contributed by Raymond Hettinger.)

  • The collections.deque class grew two new methods, count() and reverse(), that make it more substitutable for list objects.
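A brief sketch of the two methods (the sample letters are invented):

```python
from collections import deque

d = deque('simsalabim')
print(d.count('s'))    # occurrences of 's', as with list.count()

d.reverse()            # in-place reversal, as with list.reverse()
print(''.join(d))
```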

       
       
                  
