This regex implementation is backwards-compatible with the standard 're' module, but offers additional functionality.
The re module's behaviour with zero-width matches changed in Python 3.7, and this module follows that behaviour when compiled for Python 3.7.
Python 2 is no longer supported. The last release that supported Python 2 was 2021.11.10.
This module is targeted at CPython. It expects that all codepoints are the same width, so it won't behave properly with PyPy outside U+0000..U+007F because PyPy stores strings as UTF-8.
The regex module releases the GIL during matching on instances of the built-in (immutable) string classes, enabling other Python threads to run concurrently. It is also possible to force the regex module to release the GIL during matching by calling the matching methods with the keyword argument concurrent=True. The behaviour is undefined if the string changes during matching, so use it only when it is guaranteed that that won't happen.
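As a minimal sketch of how concurrent matching might be used (the pattern and text here are illustrative only):

```python
import regex

# Compile once, then ask the matcher to release the GIL while it runs.
# The searched string must be a built-in immutable string and must not
# be changed by another thread while the match is in progress.
pattern = regex.compile(r'\w+')
match = pattern.search('some large immutable text', concurrent=True)
print(match[0])  # the first word of the text
```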
This module supports Unicode 15.1.0. Full Unicode case-folding is supported.
There are 2 kinds of flag: scoped and global. Scoped flags can apply to only part of a pattern and can be turned on or off; global flags apply to the entire pattern and can only be turned on.
The scoped flags are: ASCII (?a), FULLCASE (?f), IGNORECASE (?i), LOCALE (?L), MULTILINE (?m), DOTALL (?s), UNICODE (?u), VERBOSE (?x), WORD (?w).
The global flags are: BESTMATCH (?b), ENHANCEMATCH (?e), POSIX (?p), REVERSE (?r), VERSION0 (?V0), VERSION1 (?V1).
If neither the ASCII, LOCALE nor UNICODE flag is specified, it will default to UNICODE if the regex pattern is a Unicode string and ASCII if it's a bytestring.
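For example (patterns here are illustrative):

```python
import regex

# A str pattern defaults to UNICODE: 'é' counts as a word character.
print(regex.match(r'\w', 'é'))
# A bytes pattern defaults to ASCII: byte 0xE9 is not a word character.
print(regex.match(rb'\w', b'\xe9'))
```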
The ENHANCEMATCH flag makes fuzzy matching attempt to improve the fit of the next match that it finds.
The BESTMATCH flag makes fuzzy matching search for the best match instead of the next match.
In order to be compatible with the re module, this module has 2 behaviours:
Version 0 behaviour (old behaviour, compatible with the re module):
Please note that the re module's behaviour may change over time, and I'll endeavour to match that behaviour in version 0.
Version 1 behaviour (new behaviour, possibly different from the re module):
If no version is specified, the regex module will default to regex.DEFAULT_VERSION.
The regex module supports both simple and full case-folding for case-insensitive matches in Unicode. Use of full case-folding can be turned on using the FULLCASE flag. Please note that this flag affects how the IGNORECASE flag works; the FULLCASE flag itself does not turn on case-insensitive matching.
Version 0 behaviour: the flag is off by default.
Version 1 behaviour: the flag is on by default.
It's not possible to support both simple sets, as used in the re module, and nested sets at the same time because of a difference in the meaning of an unescaped "[" in a set.
For example, the pattern [[a-z]--[aeiou]] is treated in the version 0 behaviour (simple sets, compatible with the re module) as a set containing "[" and the letters "a" to "z", followed by a literal "--", then a set containing the vowels, and finally a literal "]";
but in the version 1 behaviour (nested sets, enhanced behaviour) as the set of letters "a" to "z" minus the set of vowels, i.e. the lowercase consonants.
Version 0 behaviour: only simple sets are supported.
Version 1 behaviour: nested sets and set operations are supported.
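Under version 1 behaviour, the set difference from the example above can be used directly (the string here is illustrative):

```python
import regex

# (?V1) selects version 1 behaviour, in which [[a-z]--[aeiou]] is the
# lowercase ASCII letters minus the vowels, i.e. the consonants.
print(regex.findall(r'(?V1)[[a-z]--[aeiou]]+', 'the quick brown fox'))
```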
All groups have a group number, starting from 1.
Groups with the same group name will have the same group number, and groups with a different group name will have a different group number.
The same name can be used by more than one group, with later captures 'overwriting' earlier captures. All the captures of the group will be available from the captures method of the match object.
Group numbers will be reused across different branches of a branch reset, e.g. (?|(first)|(second)) has only group 1. If groups have different group names then they will, of course, have different group numbers, e.g. (?|(?P<foo>first)|(?P<bar>second)) has group 1 ("foo") and group 2 ("bar").
In the regex (\s+)(?|(?P<foo>[A-Z]+)|(\w+) (?P<foo>[0-9]+)) there are 2 groups: (\s+) is group 1; (?P<foo>[A-Z]+) is group 2, also called "foo"; (\w+) is group 2 because of the branch reset; and (?P<foo>[0-9]+) is group 2 because it's called "foo".
If you want to prevent (\w+) from being group 2, you need to name it (different name, different group number).
The issue numbers relate to the Python bug tracker, except where listed otherwise.
\p{Horiz_Space} or \p{H} matches horizontal whitespace and \p{Vert_Space} or \p{V} matches vertical whitespace.
The test of a conditional pattern can be a lookaround.
>>> regex.match(r'(?(?=\d)\d+|\w+)', '123abc')
<regex.Match object; span=(0, 3), match='123'>
>>> regex.match(r'(?(?=\d)\d+|\w+)', 'abc123')
<regex.Match object; span=(0, 6), match='abc123'>
This is not quite the same as putting a lookaround in the first branch of a pair of alternatives.
>>> print(regex.match(r'(?:(?=\d)\d+\b|\w+)', '123abc'))
<regex.Match object; span=(0, 6), match='123abc'>
>>> print(regex.match(r'(?(?=\d)\d+\b|\w+)', '123abc'))
None
In the first example, the lookaround matched, but the remainder of the first branch failed to match, and so the second branch was attempted, whereas in the second example, the lookaround matched, and the first branch failed to match, but the second branch was not attempted.
The POSIX standard for regex is to return the leftmost longest match. This can be turned on using the POSIX flag.
>>> # Normal matching.
>>> regex.search(r'Mr|Mrs', 'Mrs')
<regex.Match object; span=(0, 2), match='Mr'>
>>> regex.search(r'one(self)?(selfsufficient)?', 'oneselfsufficient')
<regex.Match object; span=(0, 7), match='oneself'>
>>> # POSIX matching.
>>> regex.search(r'(?p)Mr|Mrs', 'Mrs')
<regex.Match object; span=(0, 3), match='Mrs'>
>>> regex.search(r'(?p)one(self)?(selfsufficient)?', 'oneselfsufficient')
<regex.Match object; span=(0, 17), match='oneselfsufficient'>
Note that it will take longer to find matches because when it finds a match at a certain position, it won't return that immediately, but will keep looking to see if there's another longer match there.
If there's no group called "DEFINE", then the ... in (?(DEFINE)...) will be ignored, except that any groups defined within it can be called and the normal rules for numbering groups still apply.
>>> regex.search(r'(?(DEFINE)(?P<quant>\d+)(?P<item>\w+))(?&quant) (?&item)', '5 elephants')
<regex.Match object; span=(0, 11), match='5 elephants'>
(*PRUNE) discards the backtracking info up to that point. When used in an atomic group or a lookaround, it won't affect the enclosing pattern.
(*SKIP) is similar to (*PRUNE), except that it also sets where in the text the next attempt to match will start. When used in an atomic group or a lookaround, it won't affect the enclosing pattern.
(*FAIL) causes immediate backtracking. (*F) is a permitted abbreviation.
Keeps the part of the entire match after the position where \K occurred; the part before it is discarded.
It does not affect what groups return.
>>> m = regex.search(r'(\w\w\K\w\w\w)', 'abcdef')
>>> m[0]
'cde'
>>> m[1]
'abcde'
>>>
>>> m = regex.search(r'(?r)(\w\w\K\w\w\w)', 'abcdef')
>>> m[0]
'bc'
>>> m[1]
'bcdef'
You can use subscripting to get the captures of a repeated group.
>>> m = regex.match(r"(\w)+", "abc")
>>> m.expandf("{1}")
'c'
>>> m.expandf("{1[0]} {1[1]} {1[2]}")
'a b c'
>>> m.expandf("{1[-1]} {1[-2]} {1[-3]}")
'c b a'
>>>
>>> m = regex.match(r"(?P<letter>\w)+", "abc")
>>> m.expandf("{letter}")
'c'
>>> m.expandf("{letter[0]} {letter[1]} {letter[2]}")
'a b c'
>>> m.expandf("{letter[-1]} {letter[-2]} {letter[-3]}")
'c b a'
This is in addition to the existing \g<...>.
The LOCALE flag is intended for legacy code and has limited support. You're still recommended to use Unicode instead.
A partial match is one that matches up to the end of string, but that string has been truncated and you want to know whether a complete match could be possible if the string had not been truncated.
Partial matches are supported by match, search, fullmatch and finditer with the partial keyword argument.
Match objects have a partial attribute, which is True if it's a partial match.
For example, if you wanted a user to enter a 4-digit number and check it character by character as it was being entered:
>>> pattern = regex.compile(r'\d{4}')
>>> # Initially, nothing has been entered:
>>> print(pattern.fullmatch('', partial=True))
<regex.Match object; span=(0, 0), match='', partial=True>
>>> # An empty string is OK, but it's only a partial match.
>>> # The user enters a letter:
>>> print(pattern.fullmatch('a', partial=True))
None
>>> # It'll never match.
>>> # The user deletes that and enters a digit:
>>> print(pattern.fullmatch('1', partial=True))
<regex.Match object; span=(0, 1), match='1', partial=True>
>>> # It matches this far, but it's only a partial match.
>>> # The user enters 2 more digits:
>>> print(pattern.fullmatch('123', partial=True))
<regex.Match object; span=(0, 3), match='123', partial=True>
>>> # It matches this far, but it's only a partial match.
>>> # The user enters another digit:
>>> print(pattern.fullmatch('1234', partial=True))
<regex.Match object; span=(0, 4), match='1234'>
>>> # It's a complete match.
>>> # If the user enters another digit:
>>> print(pattern.fullmatch('12345', partial=True))
None
>>> # It's no longer a match.
>>> # This is a partial match:
>>> pattern.match('123', partial=True).partial
True
>>> # This is a complete match:
>>> pattern.match('1233', partial=True).partial
False
Sometimes it's not clear how zero-width matches should be handled. For example, should .* match 0 characters directly after matching >0 characters?
# Python 3.7 and later
>>> regex.sub('.*', 'x', 'test')
'xx'
>>> regex.sub('.*?', '|', 'test')
'|||||||||'

# Python 3.6 and earlier
>>> regex.sub('(?V0).*', 'x', 'test')
'x'
>>> regex.sub('(?V1).*', 'x', 'test')
'xx'
>>> regex.sub('(?V0).*?', '|', 'test')
'|t|e|s|t|'
>>> regex.sub('(?V1).*?', '|', 'test')
'|||||||||'
capturesdict is a combination of groupdict and captures:
groupdict returns a dict of the named groups and the last capture of those groups.
captures returns a list of all the captures of a group.
capturesdict returns a dict of the named groups and lists of all the captures of those groups.
>>> m = regex.match(r"(?:(?P<word>\w+) (?P<digits>\d+)\n)+", "one 1\ntwo 2\nthree 3\n")
>>> m.groupdict()
{'word': 'three', 'digits': '3'}
>>> m.captures("word")
['one', 'two', 'three']
>>> m.captures("digits")
['1', '2', '3']
>>> m.capturesdict()
{'word': ['one', 'two', 'three'], 'digits': ['1', '2', '3']}
allcaptures returns a list of all the captures of all the groups.
allspans returns a list of the spans of all the captures of all the groups.
>>> m = regex.match(r"(?:(?P<word>\w+) (?P<digits>\d+)\n)+", "one 1\ntwo 2\nthree 3\n")
>>> m.allcaptures()
(['one 1\ntwo 2\nthree 3\n'], ['one', 'two', 'three'], ['1', '2', '3'])
>>> m.allspans()
([(0, 20)], [(0, 3), (6, 9), (12, 17)], [(4, 5), (10, 11), (18, 19)])
Group names can be duplicated.
>>> # With optional groups:
>>>
>>> # Both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w+)? or (?P<item>\w+)?", "first or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['first', 'second']
>>> # Only the second group captures.
>>> m = regex.match(r"(?P<item>\w+)? or (?P<item>\w+)?", " or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['second']
>>> # Only the first group captures.
>>> m = regex.match(r"(?P<item>\w+)? or (?P<item>\w+)?", "first or ")
>>> m.group("item")
'first'
>>> m.captures("item")
['first']
>>>
>>> # With mandatory groups:
>>>
>>> # Both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w*) or (?P<item>\w*)?", "first or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['first', 'second']
>>> # Again, both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w*) or (?P<item>\w*)", " or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['', 'second']
>>> # And yet again, both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w*) or (?P<item>\w*)", "first or ")
>>> m.group("item")
''
>>> m.captures("item")
['first', '']
fullmatch behaves like match, except that it must match all of the string.
>>> print(regex.fullmatch(r"abc", "abc").span())
(0, 3)
>>> print(regex.fullmatch(r"abc", "abcx"))
None
>>> print(regex.fullmatch(r"abc", "abcx", endpos=3).span())
(0, 3)
>>> print(regex.fullmatch(r"abc", "xabcy", pos=1, endpos=4).span())
(1, 4)
>>>
>>> regex.match(r"a.*?", "abcd").group(0)
'a'
>>> regex.fullmatch(r"a.*?", "abcd").group(0)
'abcd'
subf and subfn are alternatives to sub and subn respectively. When passed a replacement string, they treat it as a format string.
>>> regex.subf(r"(\w+) (\w+)", "{0} => {2} {1}", "foo bar")
'foo bar => bar foo'
>>> regex.subf(r"(?P<word1>\w+) (?P<word2>\w+)", "{word2} {word1}", "foo bar")
'bar foo'
expandf is an alternative to expand. When passed a replacement string, it treats it as a format string.
>>> m = regex.match(r"(\w+) (\w+)", "foo bar")
>>> m.expandf("{0} => {2} {1}")
'foo bar => bar foo'
>>>
>>> m = regex.match(r"(?P<word1>\w+) (?P<word2>\w+)", "foo bar")
>>> m.expandf("{word2} {word1}")
'bar foo'
A match object contains a reference to the string that was searched, via its string attribute. The detach_string method will 'detach' that string, making it available for garbage collection, which might save valuable memory if that string is very large.
>>> m = regex.search(r"\w+", "Hello world")
>>> print(m.group())
Hello
>>> print(m.string)
Hello world
>>> m.detach_string()
>>> print(m.group())
Hello
>>> print(m.string)
None
Recursive and repeated patterns are supported.
(?R) or (?0) tries to match the entire regex recursively. (?1), (?2), etc, try to match the relevant group.
(?&name) tries to match the named group.
>>> regex.match(r"(Tarzan|Jane) loves (?1)", "Tarzan loves Jane").groups()
('Tarzan',)
>>> regex.match(r"(Tarzan|Jane) loves (?1)", "Jane loves Tarzan").groups()
('Jane',)
>>> m = regex.search(r"(\w)(?:(?R)|(\w?))\1", "kayak")
>>> m.group(0, 1, 2)
('kayak', 'k', None)
The first two examples show how the subpattern within the group is reused, but is _not_ itself a group. In other words, "(Tarzan|Jane) loves (?1)" is equivalent to "(Tarzan|Jane) loves (?:Tarzan|Jane)".
It's possible to backtrack into a recursed or repeated group.
You can't call a group if there is more than one group with that group name or group number ("ambiguous group reference").
The alternative forms (?P>name) and (?P&name) are also supported.
In version 1 behaviour, the regex module uses full case-folding when performing case-insensitive matches in Unicode.
>>> regex.match(r"(?iV1)strasse", "stra\N{LATIN SMALL LETTER SHARP S}e").span()
(0, 6)
>>> regex.match(r"(?iV1)stra\N{LATIN SMALL LETTER SHARP S}e", "STRASSE").span()
(0, 7)
In version 0 behaviour, it uses simple case-folding for backward compatibility with the re module.
Regex usually attempts an exact match, but sometimes an approximate, or "fuzzy", match is needed, for those cases where the text being searched may contain errors in the form of inserted, deleted or substituted characters.
A fuzzy regex specifies which types of errors are permitted, and, optionally, either the minimum and maximum or only the maximum permitted number of each type. (You cannot specify only a minimum.)
The 3 types of error are: insertion, indicated by "i"; deletion, indicated by "d"; and substitution, indicated by "s".
In addition, "e" indicates any type of error.
The fuzziness of a regex item is specified between "{" and "}" after the item.
Examples:

foo            match "foo" exactly
(?:foo){i}     match "foo", permitting insertions
(?:foo){d}     match "foo", permitting deletions
(?:foo){s}     match "foo", permitting substitutions
(?:foo){i,s}   match "foo", permitting insertions and substitutions
(?:foo){e}     match "foo", permitting errors
If a certain type of error is specified, then any type not specified will not be permitted.
In the following examples I'll omit the item and write only the fuzziness:

{d<=3}             permit at most 3 deletions, but no other types of error
{i<=1,s<=2}        permit at most 1 insertion and at most 2 substitutions, but no deletions
{1<=e<=3}          permit at least 1 and at most 3 errors
{i<=2,d<=2,e<=3}   permit at most 2 insertions, at most 2 deletions, at most 3 errors in total, but no substitutions
It's also possible to state the costs of each type of error and the maximum permitted total cost.
Examples:

{2i+2d+1s<=4}                  each insertion costs 2, each deletion costs 2, each substitution costs 1, and the total cost must not exceed 4
{i<=1,d<=1,s<=1,2i+2d+1s<=4}   at most 1 insertion, at most 1 deletion, at most 1 substitution; each insertion costs 2, each deletion costs 2, each substitution costs 1, and the total cost must not exceed 4
You can also use "<" instead of "<=" if you want an exclusive minimum or maximum.
You can add a test to perform on a character that's substituted or inserted.
Examples:

{s<=2:[a-z]}    at most 2 substitutions, which must be of characters in the set [a-z]
{s<=2,i<=3:\d}  at most 2 substitutions and at most 3 insertions, which must be digits
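As a runnable sketch of the fuzzy-matching syntax (the patterns and strings here are illustrative, not from the original examples):

```python
import regex

# One substitution, within the limit:
print(regex.fullmatch(r'(?:cat){s<=1}', 'cot'))   # matches
# One insertion, within the limit:
print(regex.fullmatch(r'(?:cat){i<=1}', 'cart'))  # matches
# One deletion, within the limit:
print(regex.fullmatch(r'(?:cat){d<=1}', 'ct'))    # matches
# More errors than permitted:
print(regex.fullmatch(r'(?:cat){e<=1}', 'dog'))   # None
# Substitutions restricted by a test on the substituted character:
print(regex.fullmatch(r'(?:cat){s<=1:[ou]}', 'cot'))  # matches
```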
By default, fuzzy matching searches for the first match that meets the given constraints. The ENHANCEMATCH flag will cause it to attempt to improve the fit (i.e. reduce the number of errors) of the match that it has found.
The BESTMATCH flag will make it search for the best match instead.
Further examples to note:

>>> regex.search("(dog){e}", "cat and dog")[1]
'cat'
>>> regex.search("(dog){e<=1}", "cat and dog")[1]
' dog'
>>> regex.search("(?e)(dog){e<=1}", "cat and dog")[1]
'dog'
In the first two examples there are perfect matches later in the string, but in neither case is it the first possible match.
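A small sketch contrasting the default first-match behaviour with the BESTMATCH flag (patterns here are illustrative):

```python
import regex

# The default fuzzy search returns the first match that meets the
# constraints: "cat" matches "dog" with 3 substitutions.
print(regex.search(r'(dog){e}', 'cat and dog')[1])
# With BESTMATCH ((?b)), the search keeps looking and returns the
# exact "dog" later in the string.
print(regex.search(r'(?b)(dog){e}', 'cat and dog')[1])
```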
The match object has an attribute fuzzy_counts which gives the total number of substitutions, insertions and deletions.
>>> # A 'raw' fuzzy match:
>>> regex.fullmatch(r"(?:cats|cat){e<=1}", "cat").fuzzy_counts
(0, 0, 1)
>>> # 0 substitutions, 0 insertions, 1 deletion.
>>> # A better match might be possible if the ENHANCEMATCH flag is used:
>>> regex.fullmatch(r"(?e)(?:cats|cat){e<=1}", "cat").fuzzy_counts
(0, 0, 0)
>>> # 0 substitutions, 0 insertions, 0 deletions.
The match object also has an attribute fuzzy_changes which gives a tuple of the positions of the substitutions, insertions and deletions.
>>> m = regex.search('(fuu){i<=2,d<=2,e<=5}', 'anaconda foo bar')
>>> m
<regex.Match object; span=(7, 10), match='a f', fuzzy_counts=(0, 2, 2)>
>>> m.fuzzy_changes
([], [7, 8], [10, 11])
What this means is that if the matched part of the string had been:
'anacondfuuoo bar'
it would've been an exact match.
However, there were insertions at positions 7 and 8:
'anaconda fuuoo bar'
        ^^
and deletions at positions 10 and 11:
'anaconda f~~oo bar'
           ^^
So the actual string was:
'anaconda foo bar'
There are occasions where you may want to include a list (actually, a set) of options in a regex.
One way is to build the pattern like this:
>>> p = regex.compile(r"first|second|third|fourth|fifth")
but if the list is large, parsing the resulting regex can take considerable time, and care must also be taken that the strings are properly escaped and properly ordered, for example, "cats" before "cat".
The new alternative is to use a named list:
>>> option_set = ["first", "second", "third", "fourth", "fifth"]
>>> p = regex.compile(r"\L<options>", options=option_set)
The order of the items is irrelevant; they are treated as a set. The named lists are available as the .named_lists attribute of the pattern object:
>>> print(p.named_lists)
{'options': frozenset({'third', 'first', 'fifth', 'fourth', 'second'})}
If there are any unused keyword arguments, ValueError will be raised unless you tell it otherwise:
>>> option_set = ["first", "second", "third", "fourth", "fifth"]
>>> p = regex.compile(r"\L<options>", options=option_set, other_options=[])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python310\lib\site-packages\regex\regex.py", line 353, in compile
    return _compile(pattern, flags, ignore_unused, kwargs, cache_pattern)
  File "C:\Python310\lib\site-packages\regex\regex.py", line 500, in _compile
    complain_unused_args()
  File "C:\Python310\lib\site-packages\regex\regex.py", line 483, in complain_unused_args
    raise ValueError('unused keyword argument {!a}'.format(any_one))
ValueError: unused keyword argument 'other_options'
>>> p = regex.compile(r"\L<options>", options=option_set, other_options=[], ignore_unused=True)
>>> p = regex.compile(r"\L<options>", options=option_set, other_options=[], ignore_unused=False)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python310\lib\site-packages\regex\regex.py", line 353, in compile
    return _compile(pattern, flags, ignore_unused, kwargs, cache_pattern)
  File "C:\Python310\lib\site-packages\regex\regex.py", line 500, in _compile
    complain_unused_args()
  File "C:\Python310\lib\site-packages\regex\regex.py", line 483, in complain_unused_args
    raise ValueError('unused keyword argument {!a}'.format(any_one))
ValueError: unused keyword argument 'other_options'
\m matches at the start of a word.
\M matches at the end of a word.
Compare with \b, which matches at the start or end of a word.
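For example (the string here is illustrative):

```python
import regex

text = 'cat catalog tomcat'
print(regex.findall(r'\mcat', text))    # 'cat' at a word start: 'cat', 'catalog'
print(regex.findall(r'cat\M', text))    # 'cat' at a word end: 'cat', 'tomcat'
print(regex.findall(r'\mcat\M', text))  # only the whole word 'cat'
```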
Normally the only line separator is \n (\x0A), but if the WORD flag is turned on then the line separators are \x0D\x0A, \x0A, \x0B, \x0C and \x0D, plus \x85, \u2028 and \u2029 when working with Unicode.
This affects the regex dot ".", which, with the DOTALL flag turned off, matches any character except a line separator. It also affects the line anchors ^ and $ (in multiline mode).
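A small sketch of the effect on "." (the string here is illustrative; \x0B is the vertical tab):

```python
import regex

# By default only '\n' is a line separator, so '.' matches '\x0b':
print(regex.findall(r'.', 'a\x0bb'))
# With the WORD flag, '\x0b' is also a line separator, so '.' skips it:
print(regex.findall(r'(?w).', 'a\x0bb'))
```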
Version 1 behaviour only
Set operators have been added, and a set [...] can include nested sets.
The operators, in order of increasing precedence, are:

|| for union ("x||y" means "x or y")
~~ (double tilde) for symmetric difference ("x~~y" means "x or y, but not both")
&& for intersection ("x&&y" means "x and y")
-- (double dash) for difference ("x--y" means "x but not y")

Implicit union, i.e. simple juxtaposition as in [ab], has the highest precedence. Thus, [ab&&cd] is the same as [[a||b]&&[c||d]].
Examples:

[ab]                  # Set containing 'a' and 'b'
[a-z]                 # Set containing 'a' .. 'z'
[[a-z]--[qw]]         # Set containing all the lowercase ASCII letters except 'q' and 'w'
[a-z--qw]             # Same as above
[\p{L}--QW]           # Set containing all letters except 'Q' and 'W'
[\p{Greek}--\p{Lu}]   # Set containing all Greek letters except uppercase letters
[[a-z]&&[0-9a-fA-F]]  # Set containing all the lowercase ASCII hexadecimal digits
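A runnable sketch of the set operators (the strings here are illustrative):

```python
import regex

# && intersection: characters in both operands.
print(regex.findall(r'(?V1)[[a-z]&&[aeiou]]', 'abcde'))   # the vowels
# ~~ symmetric difference: in either operand but not both.
print(regex.findall(r'(?V1)[[a-z]~~[aeiou]]', 'abcde'))   # the consonants
# -- difference: in the first operand but not the second.
print(regex.findall(r'(?V1)[[a-z]--[aeiou]]', 'abcde'))   # also the consonants
```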
regex.escape has an additional keyword parameter special_only. When True, only 'special' regex characters, such as '?', are escaped.
>>> regex.escape("foo!?", special_only=False)
'foo\\!\\?'
>>> regex.escape("foo!?", special_only=True)
'foo!\\?'
regex.escape has an additional keyword parameter literal_spaces. When True, spaces are not escaped.
>>> regex.escape("foo bar!?", literal_spaces=False)
'foo\\ bar!\\?'
>>> regex.escape("foo bar!?", literal_spaces=True)
'foo bar!\\?'
A match object has additional methods which return information on all the successful matches of a repeated group. These methods are:

matchobject.captures([group1, ...]): returns a list of the strings matched in a group or groups. Compare with matchobject.group([group1, ...]).
matchobject.starts([group]): returns a list of the start positions. Compare with matchobject.start([group]).
matchobject.ends([group]): returns a list of the end positions. Compare with matchobject.end([group]).
matchobject.spans([group]): returns a list of the spans. Compare with matchobject.span([group]).
>>> m = regex.search(r"(\w{3})+", "123456789")
>>> m.group(1)
'789'
>>> m.captures(1)
['123', '456', '789']
>>> m.start(1)
6
>>> m.starts(1)
[0, 3, 6]
>>> m.end(1)
9
>>> m.ends(1)
[3, 6, 9]
>>> m.span(1)
(6, 9)
>>> m.spans(1)
[(0, 3), (3, 6), (6, 9)]
Atomic grouping, (?>...), is supported: if the following pattern subsequently fails, then the subpattern as a whole will fail.
Possessive quantifiers are supported: (?:...)?+ ; (?:...)*+ ; (?:...)++ ; (?:...){min,max}+
The subpattern is matched up to 'max' times. If the following pattern subsequently fails, then all the repeated subpatterns will fail as a whole. For example, (?:...)++ is equivalent to (?>(?:...)+).
(?flags-flags:...)
The flags will apply only to the subpattern. Flags can be turned on or off.
The definition of a 'word' character has been expanded for Unicode. It conforms to the Unicode specification at http://www.unicode.org/reports/tr29/.
A lookbehind can match a variable-length string.
regex.split, regex.sub and regex.subn support a 'flags' argument.
regex.sub and regex.subn support 'pos' and 'endpos' arguments.
regex.findall and regex.finditer support an 'overlapped' flag which permits overlapped matches.
regex.splititer has been added. It's a generator equivalent of regex.split.
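A short sketch of these features (the patterns and strings here are illustrative):

```python
import regex

# A variable-length lookbehind:
print(regex.findall(r'(?<=\d+)[a-z]+', '123abc 45def'))
# Overlapped matches:
print(regex.findall(r'\w{3}', 'abcde', overlapped=True))
# splititer, the generator equivalent of split:
print(list(regex.splititer(r',\s*', 'a, b,c')))
```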
A match object allows access to its groups via subscripting and slicing:
>>> m = regex.search(r"(?P<before>.*?)(?P<num>\d+)(?P<after>.*)", "pqr123stu")
>>> print(m["before"])
pqr
>>> print(len(m))
4
>>> print(m[:])
('pqr123stu', 'pqr', '123', 'stu')
Groups can be named with (?<name>...) as well as the existing (?P<name>...).
Groups can be referenced within a pattern with \g<name>. This also allows there to be more than 99 groups.
Named characters are supported. Note that only those known by Python's Unicode database will be recognised.
\p{property=value}; \P{property=value}; \p{value}; \P{value}
Many Unicode properties are supported, including blocks and scripts. \p{property=value} or \p{property:value} matches a character whose property property has value value. The inverse of \p{property=value} is \P{property=value} or \p{^property=value}.
If the short form \p{value} is used, the properties are checked in the order: General_Category, Script, Block, binary property. For example, \p{Latin} is the Latin script (Script=Latin), \p{BasicLatin} is the Basic Latin block (Block=Basic_Latin), and \p{Alphabetic} is the Alphabetic binary property (Alphabetic=Yes).
A short form starting with Is indicates a script or binary property, e.g. \p{IsLatin} (Script=Latin) or \p{IsAlphabetic} (Alphabetic=Yes).
A short form starting with In indicates a block property, e.g. \p{InBasicLatin} (Block=Basic_Latin).
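For example (the strings here are illustrative):

```python
import regex

# Script property, long and short forms:
print(regex.findall(r'\p{Script=Greek}+', 'abc αβγ def'))
print(regex.findall(r'\p{IsGreek}+', 'abc αβγ def'))
# Block property via the In prefix:
print(regex.findall(r'\p{InBasicLatin}', 'aα'))
```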
[[:alpha:]]; [[:^alpha:]]
POSIX character classes are supported. These are normally treated as an alternative form of \p{...}.
The exceptions are alnum, digit, punct and xdigit, whose definitions are different from those of Unicode.
[[:alnum:]] is equivalent to \p{posix_alnum}.
[[:digit:]] is equivalent to \p{posix_digit}.
[[:punct:]] is equivalent to \p{posix_punct}.
[[:xdigit:]] is equivalent to \p{posix_xdigit}.
A search anchor has been added. It matches at the position where each search started/continued and can be used for contiguous matches or in negative variable-length lookbehinds to limit how far back the lookbehind goes:
>>> regex.findall(r"\w{2}", "abcd ef")
['ab', 'cd', 'ef']
>>> regex.findall(r"\G\w{2}", "abcd ef")
['ab', 'cd']
Searches can also work backwards:
>>> regex.findall(r".", "abc")
['a', 'b', 'c']
>>> regex.findall(r"(?r).", "abc")
['c', 'b', 'a']
Note that the result of a reverse search is not necessarily the reverse of a forward search:
>>> regex.findall(r"..", "abcde")
['ab', 'cd']
>>> regex.findall(r"(?r)..", "abcde")
['de', 'bc']
The grapheme matcher is supported. It conforms to the Unicode specification at http://www.unicode.org/reports/tr29/.
Group numbers will be reused across the alternatives, but groups with different names will have different group numbers.
>>> regex.match(r"(?|(first)|(second))", "first").groups()
('first',)
>>> regex.match(r"(?|(first)|(second))", "second").groups()
('second',)
Note that there is only one group.
The WORD flag changes the definition of a 'word boundary' to that of a default Unicode word boundary. This applies to \b and \B.
The matching methods and functions support timeouts. The timeout (in seconds) applies to the entire operation:
>>> from time import sleep
>>>
>>> def fast_replace(m):
...     return 'X'
...
>>> def slow_replace(m):
...     sleep(0.5)
...     return 'X'
...
>>> regex.sub(r'[a-z]', fast_replace, 'abcde', timeout=2)
'XXXXX'
>>> regex.sub(r'[a-z]', slow_replace, 'abcde', timeout=2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python310\lib\site-packages\regex\regex.py", line 278, in sub
    return pat.sub(repl, string, count, pos, endpos, concurrent, timeout)
TimeoutError: regex timed out