diff --git a/.travis.yml b/.travis.yml new file mode 100644 index 0000000..b8861da --- /dev/null +++ b/.travis.yml @@ -0,0 +1,30 @@ +sudo: false + +language: python + +python: + - pypy + - 2.7 + - 3.4 + - 3.5 + - 3.6 + - 3.7 + - 3.8 + - 3.9 + - 3.10-dev +install: + - pip install tox-travis codecov + +script: + - tox + +matrix: + fast_finish: true + + include: + - python: 3.9 + env: + - TOXENV=flake8 + +after_success: + - coverage combine && codecov diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 0000000..96cb109 --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,35 @@ +# Contributing + +Contributions to Scour are welcome, feel free to create a pull request! + +In order to be able to merge your PR as fast as possible, please try to stick to the following guidelines. + +> _**TL;DR** (if you know what you're doing) – Always run [`make check`](https://github.com/scour-project/scour/blob/master/Makefile) before creating a PR to check for common problems._ + + +## Code Style + +The Scour project tries to follow the coding conventions described in [PEP 8 - The Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). While there are some inconsistencies in existing code (e.g. with respect to naming conventions and the usage of globals), new code should always abide by the standard. + +To quickly check for common mistakes, you can use [`flake8`](https://pypi.python.org/pypi/flake8). Our [Makefile](https://github.com/scour-project/scour/blob/master/Makefile) has a convenience target with the correct options: +```Makefile +make flake8 +``` + +## Unit Tests + +In order to check the functionality of Scour and prevent any regressions in existing code, a number of tests exist which use the [`unittest`](https://docs.python.org/library/unittest.html) unit testing framework which ships with Python. You can quickly run the tests by using the [Makefile](https://github.com/scour-project/scour/blob/master/Makefile) convenience target: +```Makefile +make test +``` + +These tests are run automatically on all PRs using [TravisCI](https://travis-ci.org/scour-project/scour) and have to pass at all times! When you add new functionality you should always include suitable tests with your PR (see [`test_scour.py`](https://github.com/scour-project/scour/blob/master/test_scour.py)). + +### Coverage + +To ensure that all possible code conditions are covered by a test, you can use [`coverage`](https://pypi.python.org/pypi/coverage).
The [Makefile](https://github.com/scour-project/scour/blob/master/Makefile) convenience target automatically creates an HTML report in `htmlcov/index.html`: +```Makefile +make coverage +``` + +These reports are also created automatically by our TravisCI builds and are accessible via [Codecov](https://codecov.io/gh/scour-project/scour). diff --git a/HISTORY.md b/HISTORY.md new file mode 100644 index 0000000..de0b503 --- /dev/null +++ b/HISTORY.md @@ -0,0 +1,347 @@ +# Release Notes for Scour + +## Version 0.38.2 (2020-11-22) +* Fix another regression caused by new feature to merge sibling groups ([#260](https://github.com/scour-project/scour/issues/260)) + +## Version 0.38.1 (2020-09-02) +* Fix regression caused by new feature to merge sibling groups ([#260](https://github.com/scour-project/scour/issues/260)) + +## Version 0.38 (2020-08-06) +* Fix issue with dropping xlink:href attribute when collapsing referenced gradients ([#206](https://github.com/scour-project/scour/pull/206)) +* Fix issue with dropping ID while de-duplicating gradients ([#207](https://github.com/scour-project/scour/pull/207)) +* Improve `--shorten-ids` so it re-maps IDs that are already used in the document if they're shorter ([#187](https://github.com/scour-project/scour/pull/187)) +* Fix whitespace handling for SVG 1.2 flowed text ([#235](https://github.com/scour-project/scour/issues/235)) +* Improvement: Merge sibling `<g>` nodes with identical attributes ([#208](https://github.com/scour-project/scour/pull/208)) +* Improve performance of XML serialization ([#247](https://github.com/scour-project/scour/pull/247)) +* Improve performance of gradient de-duplication ([#248](https://github.com/scour-project/scour/pull/248)) +* Some general performance improvements ([#249](https://github.com/scour-project/scour/pull/249)) + +## Version 0.37 (2018-07-04) +* Fix escaping of quotes in attribute values. ([#152](https://github.com/scour-project/scour/pull/152)) +* A lot of performance improvements making processing significantly faster in many cases. ([#167](https://github.com/scour-project/scour/pull/167), [#169](https://github.com/scour-project/scour/pull/169), [#171](https://github.com/scour-project/scour/pull/171), [#185](https://github.com/scour-project/scour/pull/185)) +* Fix exception when removing duplicated gradients while `--keep-unreferenced-defs` is used ([#173](https://github.com/scour-project/scour/pull/173)) +* Remove some illegal optimizations of `m0 0` sub-path commands ([#178](https://github.com/scour-project/scour/pull/178)) +* Fix and improve handling of boolean flags in elliptical arc path commands ([#183](https://github.com/scour-project/scour/pull/183)) +* Fix exception when shorthand transform `scale(1)` with single number is used ([#191](https://github.com/scour-project/scour/pull/191)) +* Fix exception when using two-number forms of the filter attributes `baseFrequency`, `order`, `radius` and `stdDeviation` ([#192](https://github.com/scour-project/scour/pull/192)) +* Improve whitespace handling in text nodes fixing an issue where scouring added spaces in error and reducing file size in many cases ([#199](https://github.com/scour-project/scour/pull/199)) +* Drop official support for Python 3.3. (While it will probably continue to work for a while, compatibility is not guaranteed anymore. If you continue to use Scour with Python 3.3 and should find/fix any compatibility issues, pull requests are welcome, though.)
+ + ## Version 0.36 (2017-08-06) +* Fix embedding of raster images, which was broken in most cases and did not work at all in Python 3. ([#120](https://github.com/scour-project/scour/issues/120)) +* Some minor fixes for statistics output. +* Greatly improve the algorithm to reduce numeric precision. + * Precision was not properly reduced for some numbers. + * Only use reduced precision if it results in a shorter string representation; otherwise preserve full precision in output (e.g. use "123" instead of "1e2" when precision is set to 1). + * Reduce precision of lengths in `viewBox` ([#127](https://github.com/scour-project/scour/issues/127)) + * Add option `--set-c-precision` which allows setting a reduced numeric precision for control points.
Control points determine how a path is bent in between two nodes and are less sensitive to a reduced precision than the position coordinates of the nodes themselves. This option can be used to save a few additional bytes without affecting visual appearance negatively. +* Fix: Unnecessary whitespace was not stripped from elliptical paths. ([#89](https://github.com/scour-project/scour/issues/89)) +* Improve and fix functionality to collapse straight path segments. ([#146](https://github.com/scour-project/scour/issues/146)) + * Collapse subpaths of moveto `m` and lineto `l` commands if they have the same direction (before we only collapsed horizontal/vertical `h`/`v` lineto commands). + * Attempt to collapse lineto `l` commands into a preceding moveto `m` command (these are then called "implicit lineto commands") + * Do not collapse straight path segments in paths that have intermediate markers. ([#145](https://github.com/scour-project/scour/issues/145)) + * Preserve empty path segments if they have `stroke-linecap` set to `round` or `square`. They render no visible line but a tiny dot or square. + + +## Version 0.35 (2016-09-14) + +* Drop official support for Python 2.6. (While it will probably continue to work for a while, compatibility is not guaranteed anymore. If you continue to use Scour with Python 2.6 and should find/fix any compatibility issues, pull requests are welcome, though.) +* Fix: Unused IDs were not shortened when `--shorten-ids` was used. ([#19](https://github.com/scour-project/scour/issues/19)) +* Fix: Most elements were still removed from `<defs>` when `--keep-unreferenced-defs` was used. ([#62](https://github.com/scour-project/scour/issues/62)) +* Improve escaping of single/double quotes ('/") in attributes. ([#64](https://github.com/scour-project/scour/issues/64)) +* Print usage information if no input file was specified (and no data is available from `stdin`). ([#65](https://github.com/scour-project/scour/issues/65)) +* Redirect informational output to `stderr` when SVG is output to `stdout`. ([#67](https://github.com/scour-project/scour/issues/67)) +* Allow elements to be found via `Document.getElementById()` in the minidom document returned by scourXmlFile(). ([#68](https://github.com/scour-project/scour/issues/68)) +* Improve code to remove default attribute values and add a lot of new default values. ([#70](https://github.com/scour-project/scour/issues/70)) +* Fix: Only attempt to group elements that the content model allows to be children of a `<g>` when `--create-groups` is specified. ([#98](https://github.com/scour-project/scour/issues/98)) +* Fix: Update list of SVG presentation attributes allowing more styles to be converted to attributes and remove two entries (`line-height` and `visibility`) that were actually invalid. ([#99](https://github.com/scour-project/scour/issues/99)) +* Add three options that work analogously to `--remove-metadata` (removes `<metadata>` elements) ([#102](https://github.com/scour-project/scour/issues/102)) + * `--remove-titles` (removes `<title>` elements) + * `--remove-descriptions` (removes `<desc>` elements) + * `--remove-descriptive-elements` (removes all of the descriptive elements, i.e. `<title>`, `<desc>` and `<metadata>`) +* Fix removal rules for the `overflow` attribute.
([#104](https://github.com/scour-project/scour/issues/104)) +* Improvement: Automatically order all attributes ([#105](https://github.com/scour-project/scour/issues/105)), as well as `style` declarations ([#107](https://github.com/scour-project/scour/issues/107)) allowing for a constant output across multiple runs of Scour. Before, order could change arbitrarily. +* Improve path scouring. ([#108](https://github.com/scour-project/scour/issues/108))<br>Notably, Scour performs all calculations with enhanced precision now, guaranteeing maximum accuracy when optimizing path data. Numerical precision is reduced as a last step of the optimization according to the `--precision` option. +* Fix replacement of removed duplicate gradients if the `fill`/`stroke` properties contained a fallback. ([#109](https://github.com/scour-project/scour/issues/109)) +* Fix conversion of cubic Bézier "curveto" commands into "shorthand/smooth curveto" commands. ([#110](https://github.com/scour-project/scour/issues/110)) +* Fix some issues due to removal of properties without considering inheritance rules. ([#111](https://github.com/scour-project/scour/issues/111)) + + +## Version 0.34 (2016-07-25) + +* Add a function to sanitize an arbitrary Python object containing options for Scour as attributes (usage: `Scour.sanitizeOptions(options)`).<br>This simplifies usage of the Scour module by other scripts while avoiding any compatibility issues that might arise when options are added/removed/renamed in Scour. ([#44](https://github.com/scour-project/scour/issues/44)) +* Input/output file can now be specified as positional arguments (e.g. `scour input.svg output.svg`). ([#46](https://github.com/scour-project/scour/issues/46)) +* Improve `--help` output by intuitively arranging options in groups. ([#46](https://github.com/scour-project/scour/issues/46)) +* Add option `--error-on-flowtext` to raise an exception whenever a non-standard `<flowText>` element is found (which is only supported in Inkscape). If this option is not specified, a warning will be shown. ([#53](https://github.com/scour-project/scour/issues/53)) +* Automate tests with continuous integration via Travis. ([#52](https://github.com/scour-project/scour/issues/52)) + + +## Version 0.33 (2016-01-29) + +* Add support for removal of editor data of Sketch. ([#37](https://github.com/scour-project/scour/issues/37)) +* Add option `--verbose` (or `-v`) to show detailed statistics after running Scour. By default, only a single line containing the most important information is output now. + + +## Version 0.32 (2015-12-10) + +* Add functionality to remove unused XML namespace declarations from the `<svg>` root element. ([#14](https://github.com/scour-project/scour/issues/14)) +* Restore unittests which were lost during move to GitHub. ([#24](https://github.com/scour-project/scour/issues/24)) +* Fix a potential regex matching issue in `points` attribute of `<polygon>` and `<polyline>` elements. ([#24](https://github.com/scour-project/scour/issues/24)) +* Fix a crash with `points` attribute of `<polygon>` and `<polyline>` starting with a negative number. ([#24](https://github.com/scour-project/scour/issues/24)) +* Fix encoding issues when input file contained unicode characters. ([#27](https://github.com/scour-project/scour/issues/27)) +* Fix encoding issues when using `stdin`/`stdout` as input/output. ([#27](https://github.com/scour-project/scour/issues/27)) +* Fix removal of comments. If a node contained multiple comments, usually not all of them were removed.
([#28](https://github.com/scour-project/scour/issues/28)) + + +## Version 0.31 (2015-11-16) + +* Ensure Python 3 compatibility. ([#8](https://github.com/scour-project/scour/issues/8)) +* Add option `--nindent` to set the number of spaces/tabs used for indentation (defaults to 1). ([#13](https://github.com/scour-project/scour/issues/13)) +* Add option `--no-line-breaks` to suppress output of line breaks and indentation altogether. ([#13](https://github.com/scour-project/scour/issues/13)) +* Add option `--strip-xml-space` which removes the specification of `xml:space="preserve"` on the `<svg>` root element which would otherwise disallow Scour to make any whitespace changes in output. ([#13](https://github.com/scour-project/scour/issues/13)) + + +## Version 0.30 (2014-08-05) + +* Fix ignoring of additional args when invoked from scons. + + +## Version 0.29 (2014-07-26) + +* Add option `--keep-unreferenced-defs` to preserve elements in `<defs>` that are not referenced and would be removed otherwise. ([#2](https://github.com/scour-project/scour/issues/2)) +* Add option to ignore unknown cmd line opts. + + +## Version 0.28 (2014-01-12) + +* Add option `--shorten-ids-prefix` which allows to add a custom prefix to all shortened IDs. ([#1](https://github.com/scour-project/scour/issues/1)) + + +## Version 0.27 (2013-10-26) + +* Allow direct calling of the Scour module. + + +## Version 0.26 (2013-10-22) + +* Re-release of Scour 0.26, re-packaged as a Python module [available from PyPI](https://pypi.python.org/pypi/scour) (Thanks to [Tobias Oberstet](https://github.com/oberstet)!). +* Development moved to GitHub (https://github.com/scour-project/scour). + + +## Version 0.26 (2011-05-09) + +* Fix [Bug 702423](https://bugs.launchpad.net/scour/+bug/702423) to function well in the presence of multiple identical gradients and `--disable-style-to-xml`. +* Fix [Bug 722544](https://bugs.launchpad.net/scour/+bug/722544) to properly optimize transformation matrices. Also optimize more things away in transformation specifications. (Thanks to Johan Sundström for the patch.) +* Fix [Bug 616150](https://bugs.launchpad.net/scour/+bug/616150) to run faster using the `--create-groups` option. +* Fix [Bug 708515](https://bugs.launchpad.net/scour/+bug/562784) to handle raster embedding better in the presence of file:// URLs. +* Fix [Bug 714717](https://bugs.launchpad.net/scour/+bug/714717) to avoid deleting renderable CurveTo commands in paths, which happen to end where they started. +* Per [Bug 714727](https://bugs.launchpad.net/scour/+bug/714727) and [Bug 714720](https://bugs.launchpad.net/scour/+bug/714720), Scour now deletes text attributes, including "text-align", from elements and groups of elements that only contain shapes. (Thanks to Jan Thor for the patches.) +* Per [Bug 714731](https://bugs.launchpad.net/scour/+bug/714731), remove the default value of more SVG attributes. (Thanks to Jan Thor for the patch.) +* Fix [Bug 717826](https://bugs.launchpad.net/scour/+bug/717826) to emit the correct line terminator (CR LF) in optimized SVG content on the version of Scour used in Inkscape on Windows. +* Fix [Bug 734933](https://bugs.launchpad.net/scour/+bug/734933) to avoid deleting renderable LineTo commands in paths, which happen to end where they started, if their stroke-linecap property has the value "round". +* Fix [Bug 717254](https://bugs.launchpad.net/scour/+bug/717254) to delete `<defs>` elements that become empty after unreferenced element removal. (Thanks to Jan Thor for the patch.) 
+* Fix [Bug 627372](https://bugs.launchpad.net/scour/+bug/627372) to future-proof the parameter passing between Scour and Inkscape. (Thanks to Bernd Feige for the patch.) +* Fix [Bug 638764](https://bugs.launchpad.net/scour/+bug/638764), which crashed Scour due to [Python Issue 2531](http://bugs.python.org/issue2531) regarding floating-point handling in ArcTo path commands. (Thanks to [Walther](https://launchpad.net/~walther-md) for investigating this bug.) +* Per [Bug 654759](https://bugs.launchpad.net/scour/+bug/654759), enable librsvg workarounds by default in Scour. +* Added ID change and removal protection options per [bug 492277](https://bugs.launchpad.net/scour/+bug/492277): `--protect-ids-noninkscape`, `--protect-ids-prefix`, `--protect-ids-list`. (Thanks to Jan Thor for this patch.) + + +## Version 0.25 (2010-07-11) + +* Fix [Bug 541889](https://bugs.launchpad.net/scour/+bug/541889) to parse polygon/polyline points missing whitespace/comma separating a negative value. Always output points attributes as comma-separated. +* Fix [Bug 519698](https://bugs.launchpad.net/scour/+bug/519698) to properly parse move commands that have line segments. +* Fix [Bug 577940](https://bugs.launchpad.net/scour/+bug/577940) to include stroke-dasharray into list of style properties turned into XML attributes. +* Fix [Bug 562784](https://bugs.launchpad.net/scour/+bug/562784), typo in Inkscape description +* Fix [Bug 603988](https://bugs.launchpad.net/scour/+bug/603988), do not commonize attributes if the element is referenced elsewhere. +* Fix [Bug 604000](https://bugs.launchpad.net/scour/+bug/604000), correctly remove default overflow attributes. +* Fix [Bug 603994](https://bugs.launchpad.net/scour/+bug/603994), fix parsing of `<style>` element contents when a CDATA is present +* Fix [Bug 583758](https://bugs.launchpad.net/scour/+bug/583758), added a bit to the Inkscape help text saying that groups aren't collapsed if IDs are also not stripped. +* Fix [Bug 583458](https://bugs.launchpad.net/scour/+bug/583458), another typo in the Inkscape help tab. +* Fix [Bug 594930](https://bugs.launchpad.net/scour/+bug/594930), In a `<switch>`, require one level of `<g>` if there was a `<g>` in the file already. Otherwise, only the first subelement of the `<g>` is chosen and rendered. +* Fix [Bug 576958](https://bugs.launchpad.net/scour/+bug/576958), "Viewbox option doesn't work when units are set", when renderer workarounds are disabled. +* Added many options: `--remove-metadata`, `--quiet`, `--enable-comment-stripping`, `--shorten-ids`, `--renderer-workaround`. + + +## Version 0.24 (2010-02-05) + +* Fix [Bug 517064](https://bugs.launchpad.net/scour/+bug/517064) to make XML well-formed again +* Fix [Bug 503750](https://bugs.launchpad.net/scour/+bug/503750) fix Inkscape extension to correctly pass `--enable-viewboxing` +* Fix [Bug 511186](https://bugs.launchpad.net/scour/+bug/511186) to allow comments outside of the root `<svg>` node + + +## Version 0.23 (2010-01-04) + +* Fix [Bug 482215](https://bugs.launchpad.net/scour/+bug/482215) by using os.linesep to end lines +* Fix unittests to run properly in Windows +* Removed default scaling of image to 100%/100% and creating a viewBox. 
Added `--enable-viewboxing` option to explicitly turn that on +* Fix [Bug 503034](https://bugs.launchpad.net/scour/+bug/503034) by only removing children of a group if the group itself has not been referenced anywhere else in the file + + +## Version 0.22 (2009-11-09) + +* Fix [Bug 449803](https://bugs.launchpad.net/scour/+bug/449803) by ensuring input and output filenames differ. +* Fix [Bug 453737](https://bugs.launchpad.net/scour/+bug/453737) by updated Inkscape's scour extension with a UI +* Fix whitespace collapsing on non-textual elements that had xml:space="preserve" +* Fix [Bug 479669](https://bugs.launchpad.net/scour/+bug/479669) to handle empty `<style>` elements. + + +## Version 0.21 (2009-09-27) + +* Fix [Bug 427309](https://bugs.launchpad.net/scour/+bug/427309) by updated Scour inkscape extension file to include yocto_css.py +* Fix [Bug 435689](https://bugs.launchpad.net/scour/+bug/435689) by properly preserving whitespace in XML serialization +* Fix [Bug 436569](https://bugs.launchpad.net/scour/+bug/436569) by getting `xlink:href` prefix correct with invalid SVG + + +## Version 0.20 (2009-08-31) + +* Fix [Bug 368716](https://bugs.launchpad.net/scour/+bug/368716) by implementing a really tiny CSS parser to find out if any style element have rules referencing gradients, filters, etc +* Remove unused attributes from parent elements +* Fix a bug with polygon/polyline point parsing if there was whitespace at the end + + +## Version 0.19 (2009-08-13) + +* Fix XML serialization bug: `xmlns:XXX` prefixes not preserved when not in default namespace +* Fix XML serialization bug: remapping to default namespace was not actually removing the old prefix +* Move common attributes to ancestor elements +* Fix [Bug 412754](https://bugs.launchpad.net/scour/+bug/401628): Elliptical arc commands must have comma/whitespace separating the coordinates +* Scour lengths for svg x,y,width,height,*opacity,stroke-width,stroke-miterlimit + + +## Version 0.18 (2009-08-09) + +* Remove attributes of gradients if they contain default values +* Reduce bezier/quadratic (c/q) segments to their shorthand equivalents (s/t) +* Move to a custom XML serialization such that `id`/`xml:id` is printed first (Thanks to Richard Hutch for the suggestion) +* Added `--indent` option to specify indentation type (default='space', other options: 'none', 'tab') + + +## Version 0.17 (2009-08-03) + +* Only convert to #RRGGBB format if the color name will actually be shorter +* Remove duplicate gradients +* Remove empty q,a path segments +* Scour polyline coordinates just like path/polygon +* Scour lengths from most attributes +* Remove redundant SVG namespace declarations and prefixes + + +## Version 0.16 (2009-07-30) + +* Fix [Bug 401628](https://bugs.launchpad.net/scour/+bug/401628): Keep namespace declarations when using `--keep-editor-data` (Thanks YoNoSoyTu!) +* Remove trailing zeros after decimal places for all path coordinates +* Use scientific notation in path coordinates if that representation is shorter +* Scour polygon coordinates just like path coordinates +* Add XML prolog to scour output to ensure valid XML, added `--strip-xml-prolog` option + + +## Version 0.15 (2009-07-05) + +* added `--keep-editor-data` command-line option +* Fix [Bug 395645](https://bugs.launchpad.net/scour/+bug/395645): Keep all identified children inside a defs (Thanks Frederik!) 
+* Fix [Bug 395647](https://bugs.launchpad.net/scour/+bug/395647): Do not remove closepath (Z) path segments + + +## Version 0.14 (2009-06-10) + +* Collapse adjacent commands of the same type +* Convert straight curves into line commands +* Eliminate last segment in a polygon +* Rework command-line argument parsing +* Fix bug in embedRasters() caused by new command-line parsing +* added `--disable-embed-rasters` command-line option + + +## Version 0.13 (2009-05-19) + +* properly deal with `fill="url("#foo")"` +* properly handle paths with more than 1 pair of coordinates in the first Move command +* remove font/text styles from shape elements (font-weight, font-size, line-height, etc) +* remove -inkscape-font-specification styles +* added `--set-precision` argument to set the number of significant digits (defaults to 5 now) +* collapse consecutive h,v coords/segments that go in the same direction + + +## Version 0.12 (2009-05-17) + +* upgraded enthought's path parser to handle scientific notation in path coordinates +* convert colors to #RRGGBB format +* added option to disable color conversion + + +## Version 0.11 (2009-04-28) + +* convert gradient stop offsets from percentages to float +* convert gradient stop offsets to integers if possible (0 or 1) +* fix bug in line-to-hv conversion +* handle non-ASCII characters (Unicode) +* remove empty line or curve segments from path +* added option to prevent style-to-xml conversion +* handle compressed svg (svgz) on the input and output +* added total time taken to the report +* Removed XML pretty printing because of [this problem](http://ronrothman.com/public/leftbraned/xml-dom-minidom-toprettyxml-and-silly-whitespace/). + + +## Version 0.10 (2009-04-27) + +* Remove path with empty d attributes +* Sanitize path data (remove unnecessary whitespace) +* Convert from absolute to relative path data +* Remove trailing zeroes from path data +* Limit to no more than 6 digits of precision +* Remove empty line segments +* Convert lines to horiz/vertical line segments where possible +* Remove some more default styles (`display:none`, `visibility:visible`, `overflow:visible`, + `marker:none`) + + +## Version 0.09 (2009-04-25) + +* Fix bug when removing stroke styles +* Remove gradients that are only referenced by one other gradient +* Added option to prevent group collapsing +* Prevent groups with title/desc children from being collapsed +* Remove stroke="none" + + +## Version 0.08 (2009-04-22) + +* Remove unnecessary nested `<g>` elements +* Remove duplicate gradient stops (same offset, stop-color, stop-opacity) +* Always keep fonts inside `<defs>`, always keep ids on fonts +* made ID stripping optional (disabled by default) + + +## Version 0.07 (2009-04-15) + +* moved all functionality into a module level function named 'scour' and began adding unit tests +* prevent metadata from being removed if they contain only text nodes +* Remove unreferenced pattern and gradient elements outside of defs +* Removal of extra whitespace, pretty printing of XML + + +## Version 0.06 (2009-04-13) + +* Prevent error when stroke-width property value has a unit +* Convert width/height into a viewBox where possible +* Convert all referenced rasters into base64 encoded URLs if the files can be found + + +## Version 0.05 (2009-04-07) + +* Removes unreferenced elements in a `<defs>` +* Removes all inkscape, sodipodi, adobe elements +* Removes all inkscape, sodipodi, adobe attributes +* Remove all unused namespace declarations on the document element +* Removes any empty 
`<defs>`, `<metadata>`, or `<g>` elements +* Style fix-ups: + * Fixes any style properties like this: `style="fill: url(#linearGradient1000) rgb(0, 0, 0);"` + * Removes any style property of: `opacity: 1;` + * Removes any stroke properties when `stroke=none` or `stroke-opacity=0` or `stroke-width=0` + * Removes any fill properties when `fill=none` or `fill-opacity=0` + * Removes all fill/stroke properties when `opacity=0` + * Removes any `stop-opacity: 1` + * Removes any `fill-opacity: 1` + * Removes any `stroke-opacity: 1` +* Convert style properties into SVG attributes diff --git a/Makefile b/Makefile index 2fb7802..532618a 100644 --- a/Makefile +++ b/Makefile @@ -1,15 +1,39 @@ all: clean install install: - python setup.py install + python3 setup.py install clean: rm -rf build rm -rf dist rm -rf scour.egg-info + rm -rf .tox + rm -f .coverage* + rm -rf htmlcov + find . -name "*.pyc" -type f -exec rm -f {} \; + find . -name "*__pycache__" -type d -prune -exec rm -rf {} \; publish: clean - python setup.py register - python setup.py sdist upload - python setup.py bdist_egg upload - python setup.py bdist_wininst upload + python3 setup.py register + python3 setup.py sdist upload + +check: test flake8 + + + +test: + python3 test_scour.py + +test_version: + PYTHONPATH=. python3 -m scour.scour --version + +test_help: + PYTHONPATH=. python3 -m scour.scour --help + +flake8: + flake8 --max-line-length=119 + +coverage: + coverage run --source=scour test_scour.py + coverage html + coverage report diff --git a/README.md b/README.md index 711de61..c5c0bc8 100644 --- a/README.md +++ b/README.md @@ -1,52 +1,72 @@ # Scour -Scour is a Python module that takes an input SVG and outputs a cleaner, -more concise SVG file. The goal is that authors will use this script after -editing the file in a GUI editor such as Inkscape or Adobe Illustrator. +[![PyPI](https://img.shields.io/pypi/v/scour.svg)](https://pypi.python.org/pypi/scour "Package listing on PyPI") +  +[![Build status](https://img.shields.io/travis/scour-project/scour.svg)](https://travis-ci.org/scour-project/scour "Build status (via TravisCI)") +[![Codecov](https://img.shields.io/codecov/c/github/scour-project/scour.svg)](https://codecov.io/gh/scour-project/scour "Code coverage (via Codecov)") -Scour was started as a vehicle for me to learn Python. In addition, the goal -is to reduce the amount of time I spend in cleaning up files I find on sites -such as openclipart.org +--- -Ideas are pulled from three places: +Scour is an SVG optimizer/cleaner written in Python that reduces the size of scalable vector graphics by optimizing structure and removing unnecessary data. - * my head - * Sam Ruby's SVG Tidy script: http://intertwingly.net/code/svgtidy/svgtidy.rb - * Inkscape's proposal for a 'cleaned SVG': http://wiki.inkscape.org/wiki/index.php/Save_Cleaned_SVG +It can be used to create streamlined vector graphics suitable for web deployment, publishing/sharing or further processing. -Regards, +The goal of Scour is to output a file that renders identically at a fraction of the size by removing a lot of redundant information created by most SVG editors. Optimization options are typically lossless but can be tweaked for more aggressive cleaning. -Jeff Schiller, 2009-04-06 +Scour is open-source and licensed under [Apache License 2.0](https://github.com/codedread/scour/blob/master/LICENSE). -codedread@gmail.com +Scour was originally developed by Jeff "codedread" Schiller and Louis Simard in 2010.
+The project moved to GitHub in 2013 and is now maintained by Tobias "oberstet" Oberstein and Patrick "Ede_123" Storz. -http://blog.codedread.com/ +This fork was created by Alexander Olsson ([alex@aleon.se](mailto:alex@aleon.se?subject=Scour)) at Aleon Apps. -http://www.codedread.com/scour/ +## Installation + +Scour requires [Python](https://www.python.org) 2.7 or 3.4+. Further, for installation, [pip](https://pip.pypa.io) should be used. + +To install this fork: + +```console +sudo make +``` + +## Extension +Place the modified extension files in the Inkscape extension directory: +```console +sudo cp extension/* /usr/share/inkscape/extensions/ +``` ## Usage Standard: - scour -i mysvg.svg -o mysvg_opt.svg +```console +scour -i input.svg -o output.svg +``` -Better (this works in IE which needs Viewbox): +Better (for older versions of Internet Explorer): - scour -i mysvg.svg -o mysvg_opt.svg --enable-viewboxing +```console +scour -i input.svg -o output.svg --enable-viewboxing +``` -Maximum: +Maximum scrubbing: - scour -i mysvg.svg -o mysvg_opt.svg --enable-viewboxing --enable-id-stripping \ - --enable-comment-stripping --shorten-ids --indent=none +```console +scour -i input.svg -o output.svg --enable-viewboxing --enable-id-stripping \ + --enable-comment-stripping --shorten-ids --indent=none +``` -Maximum + Compress: +Maximum scrubbing and a compressed SVGZ file: - scour -i mysvg.svg -o mysvg_opt.svgz --enable-viewboxing --enable-id-stripping \ - --enable-comment-stripping --shorten-ids --indent=none +```console +scour -i input.svg -o output.svgz --enable-viewboxing --enable-id-stripping \ + --enable-comment-stripping --shorten-ids --indent=none +``` -## Notes +Remove scientific notation from path data: -Packaging from [sources](http://www.codedread.com/scour/) retrieved on 2013/20/22: +```console +scour -i input.svg -o output.svg --nonsci-output +``` - * done by Tavendo GmbH, Tobias Oberstein - * license same as upstream (Apache 2.0) diff --git a/extension/output_scour.inx b/extension/output_scour.inx new file mode 100644 index 0000000..b6ce893 --- /dev/null +++ b/extension/output_scour.inx @@ -0,0 +1,132 @@ +<?xml version="1.0" encoding="UTF-8"?> +<inkscape-extension xmlns="http://www.inkscape.org/namespace/inkscape/extension"> + <name>Optimized SVG Output</name> + <id>org.inkscape.output.scour_inkscape</id> + + <param name="tab" type="notebook"> + <page name="Options" gui-text="Options"> + <param gui-text="Number of significant digits for coordinates:" + gui-description="Specifies the number of significant digits that should be output for coordinates. Note that significant digits are *not* the number of decimals but the overall number of digits in the output. For example if a value of "3" is specified, the coordinate 3.14159 is output as 3.14 while the coordinate 123.675 is output as 124." + name="set-precision" type="int" min="1">5</param> + <spacer/> + <param gui-text="Shorten color values" + gui-description="Convert all color specifications to #RRGGBB (or #RGB where applicable) format." + name="simplify-colors" type="bool">true</param> + <param gui-text="Convert CSS attributes to XML attributes" + gui-description="Convert styles from style tags and inline style="" declarations into XML attributes." + name="style-to-xml" type="bool">true</param> + <spacer/> + <param gui-text="Collapse groups" + gui-description="Remove useless groups, promoting their contents up one level. Requires "Remove unused IDs" to be set."
+ name="group-collapsing" type="bool">true</param> + <param gui-text="Create groups for similar attributes" + gui-description="Create groups for runs of elements having at least one attribute in common (e.g. fill-color, stroke-opacity, ...)." + name="create-groups" type="bool">true</param> + <spacer/> + <param gui-text="Keep editor data" + gui-description="Don't remove editor-specific elements and attributes. Currently supported: Inkscape, Sodipodi and Adobe Illustrator." + name="keep-editor-data" type="bool">false</param> + <param gui-text="Remove scientific notation" + gui-description="Remove scientific notation from path data." + name="nonsci-output" type="bool">false</param> + <param gui-text="Keep unreferenced definitions" + gui-description="Keep element definitions that are not currently used in the SVG" + name="keep-unreferenced-defs" type="bool">false</param> + <spacer/> + <param gui-text="Work around renderer bugs" + gui-description="Works around some common renderer bugs (mainly libRSVG) at the cost of a slightly larger SVG file." + name="renderer-workaround" type="bool">true</param> + </page> + <page name="Output" gui-text="SVG Output"> + <label appearance="header">Document options</label> + <param gui-text="Remove the XML declaration" + gui-description="Removes the XML declaration (which is optional but should be provided, especially if special characters are used in the document) from the file header." + name="strip-xml-prolog" type="bool">false</param> + <param gui-text="Remove metadata" + gui-description="Remove metadata tags along with all the contained information, which may include license and author information, alternate versions for non-SVG-enabled browsers, etc." + name="remove-metadata" type="bool">false</param> + <param gui-text="Remove comments" + gui-description="Remove all XML comments from output." + name="enable-comment-stripping" type="bool">false</param> + <param gui-text="Embed raster images" + gui-description="Resolve external references to raster images and embed them as Base64-encoded data URLs." + name="embed-rasters" type="bool">true</param> + <param gui-text="Enable viewboxing" + gui-description="Set page size to 100%/100% (full width and height of the display area) and introduce a viewBox specifying the drawings dimensions." + name="enable-viewboxing" type="bool">false</param> + <spacer/> + <label appearance="header">Pretty-printing</label> + <param gui-text="Format output with line-breaks and indentation" + gui-description="Produce nicely formatted output including line-breaks. If you do not intend to hand-edit the SVG file you can disable this option to bring down the file size even more at the cost of clarity." + name="line-breaks" type="bool">true</param> + <param gui-text="Indentation characters:" + gui-description="The type of indentation used for each level of nesting in the output. Specify "None" to disable indentation. This option has no effect if "Format output with line-breaks and indentation" is disabled." + name="indent" type="optiongroup" appearance="combo"> + <option value="space">Space</option> + <option value="tab">Tab</option> + <option context="Indent" value="none">None</option> + </param> + <param gui-text="Depth of indentation:" + gui-description="The depth of the chosen type of indentation. E.g. if you choose "2" every nesting level in the output will be indented by two additional spaces/tabs." 
+ name="nindent" type="int">1</param> + <param gui-text="Strip the "xml:space" attribute from the root SVG element" + gui-description="This is useful if the input file specifies "xml:space='preserve'" in the root SVG element which instructs the SVG editor not to change whitespace in the document at all (and therefore overrides the options above)." + name="strip-xml-space" type="bool">false</param> + </page> + <page name="IDs" gui-text="IDs"> + <param gui-text="Remove unused IDs" + gui-description="Remove all unreferenced IDs from elements. Those are not needed for rendering." + name="enable-id-stripping" type="bool">true</param> + <spacer/> + <param gui-text="Shorten IDs" + gui-description="Minimize the length of IDs using only lowercase letters, assigning the shortest values to the most-referenced elements. For instance, "linearGradient5621" will become "a" if it is the most used element." + name="shorten-ids" type="bool">false</param> + <param gui-text="Prefix shortened IDs with:" + gui-description="Prepend shortened IDs with the specified prefix." + name="shorten-ids-prefix" type="string"></param> + <spacer/> + <param gui-text="Preserve manually created IDs not ending with digits" + gui-description="Descriptive IDs which were manually created to reference or label specific elements or groups (e.g. #arrowStart, #arrowEnd or #textLabels) will be preserved while numbered IDs (as they are generated by most SVG editors including Inkscape) will be removed/shortened." + name="protect-ids-noninkscape" type="bool">true</param> + <param gui-text="Preserve the following IDs:" + gui-description="A comma-separated list of IDs that are to be preserved." + name="protect-ids-list" type="string"></param> + <param gui-text="Preserve IDs starting with:" + gui-description="Preserve all IDs that start with the specified prefix (e.g. specify "flag" to preserve "flag-mx", "flag-pt", etc.)." + name="protect-ids-prefix" type="string"></param> + </page> + <page name="About" gui-text="About"> + <hbox> + <image>output_scour.svg</image> + <spacer/> + <vbox> + <spacer/> + <label>Optimized SVG Output is provided by</label> + <label appearance="header" indent="1">Scour - An SVG Scrubber</label> + <spacer/> + <label>For details please refer to</label> + <label appearance="url" indent="1">https://github.com/scour-project/scour</label> + </vbox> + </hbox> + <spacer size="expand"/> + <hbox> + <label>This version of the extension is designed for</label> + <label>Scour 0.31+</label> + </hbox> + <param name="scour-version" type="string" gui-hidden="true">0.31</param> <!-- this parameter is checked programmatically in the extension to show a warning --> + <param gui-text="Show warnings for older versions of Scour" + name="scour-version-warn-old" type="bool">true</param> + </page> + </param> + + <output> + <extension>.svg</extension> + <mimetype>image/svg+xml</mimetype> + <filetypename>Optimized SVG (*.svg)</filetypename> + <filetypetooltip>Scalable Vector Graphics</filetypetooltip> + </output> + + <script> + <command location="inx" interpreter="python">output_scour.py</command> + </script> +</inkscape-extension> diff --git a/extension/output_scour.py b/extension/output_scour.py new file mode 100644 index 0000000..eebfb8a --- /dev/null +++ b/extension/output_scour.py @@ -0,0 +1,100 @@ +#!/usr/bin/env python +""" +Run the scour module on the svg output. 
+""" + + +import inkex +from inkex.localization import inkex_gettext as _ + +try: + from packaging.version import Version +except ImportError: + raise inkex.DependencyError( + _( + """Failed to import module 'packaging'. +Please make sure it is installed (e.g. using 'pip install packaging' +or 'sudo apt-get install python3-packaging') and try again. +""" + ) + ) + +try: + import scour + from scour.scour import scourString +except ImportError: + raise inkex.DependencyError( + _( + """Failed to import module 'scour'. +Please make sure it is installed (e.g. using 'pip install scour' + or 'sudo apt-get install python3-scour') and try again. +""" + ) + ) + + +class ScourInkscape(inkex.OutputExtension): + """Scour Inkscape Extension""" + + # Scour options + def add_arguments(self, pars): + pars.add_argument("--tab") + pars.add_argument("--simplify-colors", type=inkex.Boolean, dest="simple_colors") + pars.add_argument("--style-to-xml", type=inkex.Boolean) + pars.add_argument( + "--group-collapsing", type=inkex.Boolean, dest="group_collapse" + ) + pars.add_argument("--create-groups", type=inkex.Boolean, dest="group_create") + pars.add_argument("--enable-id-stripping", type=inkex.Boolean, dest="strip_ids") + pars.add_argument("--shorten-ids", type=inkex.Boolean) + pars.add_argument("--shorten-ids-prefix") + pars.add_argument("--embed-rasters", type=inkex.Boolean) + pars.add_argument( + "--keep-unreferenced-defs", type=inkex.Boolean, dest="keep_defs" + ) + pars.add_argument("--keep-editor-data", type=inkex.Boolean) + pars.add_argument("--nonsci-output", type=inkex.Boolean) + pars.add_argument("--remove-metadata", type=inkex.Boolean) + pars.add_argument("--strip-xml-prolog", type=inkex.Boolean) + pars.add_argument("--set-precision", type=int, dest="digits") + pars.add_argument("--indent", dest="indent_type") + pars.add_argument("--nindent", type=int, dest="indent_depth") + pars.add_argument("--line-breaks", type=inkex.Boolean, dest="newlines") + pars.add_argument( + "--strip-xml-space", type=inkex.Boolean, dest="strip_xml_space_attribute" + ) + pars.add_argument("--protect-ids-noninkscape", type=inkex.Boolean) + pars.add_argument("--protect-ids-list") + pars.add_argument("--protect-ids-prefix") + pars.add_argument("--enable-viewboxing", type=inkex.Boolean) + pars.add_argument( + "--enable-comment-stripping", type=inkex.Boolean, dest="strip_comments" + ) + pars.add_argument("--renderer-workaround", type=inkex.Boolean) + + # options for internal use of the extension + pars.add_argument("--scour-version") + pars.add_argument("--scour-version-warn-old", type=inkex.Boolean) + + def save(self, stream): + # version check if enabled in options + if self.options.scour_version_warn_old: + scour_version = scour.__version__ + scour_version_min = self.options.scour_version + if Version(scour_version) < Version(scour_version_min): + raise inkex.AbortExtension( + f""" +The extension 'Optimized SVG Output' is designed for Scour {scour_version_min} or later but you're + using the older version Scour {scour_version}. 
+ +Note: You can permanently disable this message on the 'About' tab of the extension window.""" + ) + del self.options.scour_version + del self.options.scour_version_warn_old + + # do the scouring + stream.write(scourString(self.svg.tostring(), self.options).encode("utf8")) + + +if __name__ == "__main__": + ScourInkscape().run() diff --git a/extension/output_scour.svg b/extension/output_scour.svg new file mode 100644 index 0000000..8f1b941 --- /dev/null +++ b/extension/output_scour.svg @@ -0,0 +1,5 @@ +<svg width="100" height="100" version="1.1" xmlns="http://www.w3.org/2000/svg"> + <path d="m84.5 51.5-12.6 0.504c-8.9e-4 -0.00454-0.00106-0.00914-0.00195-0.0137-0.00104 0.00426-9.21e-4 0.0094-0.00195 0.0137l-15.5 0.623c-0.0092-0.0168-0.016-0.0361-0.0254-0.0527-0.00207 0.0177-0.0019 0.037-0.00391 0.0547l-1.95 0.0781c-0.0314-0.0743-0.0854-0.127-0.186-0.133v0.141l-7.76 0.311c-0.0111-0.0184-0.02-0.0426-0.0312-0.0605 0.00141 0.0201 0.00632 0.0423 0.00781 0.0625l-26.5 1.06-3.74 6.81 2.86 1.36 1.02 1.43 0.887 0.271 1.56 0.41 0.273-0.479 0.953-0.135h2.72l1.29 0.0684 1.22-0.682 0.48 0.525 0.609 0.223 0.887-0.203 1.02-0.205 0.406 0.381 1.3-0.176 1.84-0.273 0.953-0.271 1.02 0.0684 1.09 0.137 0.682 0.0664 0.408-0.477 0.691 0.537 2.1-0.537 1.36 0.273 1.16-0.137 1.8-0.184 0.24-0.361 3.4 0.408 2.18-0.816 1.09 0.408 2.45-0.613h4.46l0.445-0.34 1.02-0.205 0.953 0.137 1.29-0.34 1.34 0.107c0.00197 0.0173 0.00182 0.0397 0.00391 0.0566 0.0223-0.0242 0.0276-0.032 0.0469-0.0527l1.13 0.0918 1.36 0.205 0.387-0.439 2.95 0.234 0.748 0.156 0.584-0.195 1.39-0.369 2.04 0.477 0.848 0.283 1.4-0.623 0.547 0.232 0.543-0.369 5.88 0.369 0.656-0.0957-1.7-9.74zm-69.2 0.297c-0.493-0.0229-0.788 1.14-0.584 1.17 0.186-0.398 0.398-0.77 0.584-1.17zm1.75 0.584c-0.342 0.826-0.871 1.47-1.17 2.34 0.626 0.12 0.267-0.702 0.779-0.195-5e-3 -0.727 0.76-1.69 0.391-2.14zm-0.779 0.193c-0.547 1.33-1.42 2.35-1.95 3.7 1.01-0.744 1.76-2.78 1.95-3.7zm-0.465 0.148c-0.126-0.05-0.42 0.311-0.119 0.631 0.211-0.423 0.195-0.601 0.119-0.631zm-1.87 0.0469c-0.0439 0.426-0.88 1.05-0.391 1.36-0.0281-0.498 0.863-1.06 0.391-1.36zm71.3 0.232c-0.059-0.034-0.0993 0.0425-0.0879 0.352 1.16 2.15 1.33 5.28 2.72 7.2-0.562-2.55-1.39-4.84-2.34-7.01-0.052-0.124-0.202-0.49-0.301-0.547zm-67.6 0.742c-0.835 1.07 0.486 0.719 0 0zm1.27 0.121c-0.0209-0.00292-0.052 0.0184-0.0977 0.0723-0.281 0.726-0.26 0.492-0.584 0 0.151 0.515-0.72 1.51-0.195 1.17 0.0412-0.218 0.215-0.302 0.389-0.389 0.116 0.397-0.365 1.07 0 0.584 0.226-0.169 0.635-1.42 0.488-1.44zm-1.46 1.05c-0.513-0.0589-0.639 0.268-0.584 0.777 0.513 0.0589 0.639-0.268 0.584-0.777zm-2.21 1.29c-0.0245 2.56e-4 -0.0629 0.0204-0.121 0.0664-0.041 0.367-0.782 1.01-0.391 1.17 0.113-0.242 0.683-1.24 0.512-1.24z" fill="#f6e7a1"/> + <path d="m78.9 35.6c-0.411-0.056-0.94 0.163-1.36 0.195-6.84 0.524-14.7 0.699-22.8 1.17-14 0.812-28.8 1.07-41.1 2.14-0.0167 2.54 0.323 3.93 0.195 6.81 0.657 0.901 1.29 1.83 1.56 3.12-1.3 2.27-2.86 4.28-4.28 6.43 0.811 1.33 1.81 2.47 2.73 3.7 1.55-0.0584 1.65 1.33 2.53 1.95 0.809-0.614 1.24-2.09 1.56-2.73 0.662-1.32 1.18-2.69 1.95-3.7-0.412 1.49-0.95 2.35-1.56 3.5-0.827 1.57-2.33 3.59-0.777 4.48 1.15-0.244 1.44 0.493 2.53 1.17 0.109-1.58 1.13-3.28 0.973-4.48-0.574 1.18-0.783 2.72-1.75 3.51 0.872-2.47 1.2-4.42 2.14-6.62 0.211 0.499-0.592 0.93 0 1.17 0.302-0.866 0.392-1.95 1.17-2.34-0.356 3.75-1.28 5.82-2.34 8.96 0.762-0.193 1.13 0.43 1.56 0 0.146-3.74 0.93-6.28 1.56-9.15 0.23 0.381-0.181 1.8-0.195 2.92-0.0045 0.355 0.196 2.8 0 1.17-0.128-1.07-0.163 0.525-0.193 0.779-0.191 1.6-0.63 2.93-0.779 4.48 0.613 0.0939 
0.441-0.597 1.17-0.389 0.122-0.592-0.248-1.68 0.195-1.95 0.155 0.754-0.497 2.31 0.779 1.95-0.972-0.939-0.114-2.68-0.584-3.89-0.0436 0.151-0.00608 0.385-0.195 0.391 0.321-1.38-0.0526-3.73 0.584-5.26 0.333 2.26 0.298 5.71 0 7.98 0.701-0.117 0.574 0.594 1.17 0.584 0.371-3.13 0.502-6.51 1.56-8.96 0.322 2.8-1.19 5.9-0.777 9.15 0.757 0.269 2.74 0.24 2.91-0.908 0.00126-0.0852 0.00253-0.171 0.00781-0.26 0.00732 0.0922 4e-3 0.179-0.00781 0.26-0.00558 0.376 0.0493 0.69 0.396 0.713-0.031-1.27 0.26-2.85-0.777-3.12-0.156 0.805 0.706 0.592 0.584 1.36-0.307-0.0176-0.152-0.496-0.584-0.389-0.121 0.529 0.247 1.55-0.195 1.75-0.0605-1.89 0.0118-0.482-0.584 0 0.434-2.81 0.119-6.37 0.973-8.76 0.328 1.95-0.583 4.53 0.195 5.65 0.129-1.95-0.258-4.41 0.195-6.04 0.401 1.03 0.248 2.61 0.584 3.7 0.264-0.948-0.134-2.05 0.389-2.34-0.0319 1.71 0.101 2.41 0.195 3.7 0.153 0.804 0.492-1.81 0.195-2.14 0.878 0.472 0.0244-2.25 0.973-2.34-0.123 1.96-0.463 4.41-0.584 7.59-0.57 0.284-0.947 1.4 0.195 1.36 0.348-3.29 0.287-6.98 1.36-9.54-0.148 3.42-0.457 6.68-1.17 9.54 0.55-0.161 0.506 0.275 0.975 0.195 0.284-0.874-0.131-1.64 0-2.34 0.0441-0.235 0.375-0.192 0.389-0.391-5e-3 0.0753-0.367-0.157-0.389-0.389-0.109-1.19 0.556-1.5 0.584-1.75-0.00214-0.257-0.342-0.179-0.391-0.391 0.956 0.0474-0.0676-1.88 0.584-2.14 0.0329 0.291-0.069 0.719 0.195 0.779 0.397-1.03-0.163-3.02 0.584-3.7 0.076 3.4-0.482 6.49-0.391 9.35-0.257-0.00262-0.177-0.342-0.389-0.391-0.141 0.443-0.175 0.995-0.391 1.36 1.01-0.384 1.56 0.0746 2.53-0.389 0.428-0.553-0.352-1.79-0.389-2.14-0.0591-0.572 0.192-7 0.389-7.01 0.133-0.00477-0.293 2.11 0.195 0.973 0.19-0.442-0.276-1.71 0.195-1.56 0.0429 1.99 0.383 4.16 0.584 5.45 0.197 1.26-0.156 2.78 0.584 3.89 0.0956-0.874-0.0993-1.46-0.195-2.14 0.356 0.0765 0.25 0.258 0.389-0.195 0.494-1.61 0.319-5.87 1.17-7.01 0.0164 2.03-0.489 3.53-0.391 5.64 0.597-0.931 0.19-4.27 0.975-5.26-0.291 2.21 0.367 4.42-0.195 6.23-0.0303 0.0968-0.303-0.37-0.389 0.195-0.163 1.08-0.417 2.54 0.193 2.92 0.53 0.0403 0.365-0.779 0.391-1.17 0.165-2.55 0.42-5.85 1.17-8.18 0.153 2.03-0.506 3.51 0.193 4.87 0.131-2.22-0.28-4.04 0.391-5.45 0.0396 0.368-0.131 1.92 0.195 1.36 0.106-0.283-0.229-1.01 0.193-0.975 0.774 0.411 0.36 2.06-0.193 2.34 0.487-0.0243 0.608-0.373 0.777 0.193 0.109 0.63 0.654 1.26 0.779 2.14 0.218 1.53-0.458 3.66 0.195 4.67 0.116-1.28 0.28-4.49 0.344-6.29-0.053 0.155-0.0894-2.05 0.24-2.08 0.00405-4.77e-4 -0.0488 0.938 0.193 0.777 0.0276-0.751-0.151-1.71 0.391-1.95-0.261 3.24-0.207 6.8-0.975 9.54 0.403-0.116 1.28 0.242 1.36-0.193-0.706-0.975 0.362-2.88-0.195-3.7-0.138 0.705-0.0731 1.61-0.389 2.14 0.24-2.62 0.0955-5.62 0.779-7.79 0.55 0.195 0.488 0.284 0.779 0.389-0.127 3.18 0.0117 6.63-0.779 9.15 1.49 0.29 0.922-1.21 0.973-1.95 0.152-2.23 0.239-5.84 0.779-8.18 0.361 3.38 1.31 6.03 0.779 9.73 1.01-0.558 2.32 0.301 3.7 0.195 1.09-0.0834 1.92-1.06 2.73-0.584 0.111-0.342-0.237-1.15 0.193-1.17 0.424 0.407-0.097 0.557 0 1.17 2.81-0.0119 3.95-0.632 7.21-0.584v-3.51c-0.118-1.01-0.281 0.393-0.195 0.779v1.95c-0.613-0.631-0.145 0.507-0.584 0.584-0.0713-0.523 0.178-2.37-0.195-1.95 0.0272 0.806-0.0531 1.5-0.389 1.95 0.227-3.42 0.271-5.83 1.17-8.57 0.256 0.603-0.577 1.08 0 1.36 0.108-0.347 0.0996-0.808 0.389-0.973 0.214 0.992-0.53 1.73 0.195 2.14 0.127-1.04-0.257-2.59 0.193-3.31 0.162 1.71 0.488 4.13 0.586 6.23 0.0503 1.09-0.572 3.31 0.779 3.12-0.0305-0.866-1.06-1.45-0.391-3.12 0.107 0.283-0.227 1.01 0.195 0.973 0.361-1.5-0.488-2.84-0.584-4.09-0.0388-0.503-0.321-1.56 0.389-1.95-0.109 1.79 0.751 4.14 0.973 6.23 0.452-2.14 0.324-4.87 0.975-6.81 0.167 3.03-0.492 
4.41-0.391 7.59-0.766-0.312-0.0675 0.84-0.584 0.777-0.0946-0.23-0.207-0.441-0.389-0.584-0.186 1.97 2.86 1.09 3.31 0.584 0.715 0.577 1.15 0.235 2.34 0.195-0.245-3.96-1.14-5.75-0.779-8.96 0.0117 1.61 0.384 2.86 0.779 4.09-0.0255 0.256-0.0307 1.41 0.193 0.779 0.0245-1.34 0.0891-2.64 0.391-3.7-0.0593 3.12-0.362 5.63-0.391 7.4 1.35 0.649 2.3 0.498 3.51 0.389-0.487-1.91 0.135-4.77 0.389-6.04 0.247-1.23-0.171-2.63 0.584-3.51 0.546 1.55 0.854 3.89 0.975 5.84 0.0355 0.57 0.703 1.52-0.195 1.75-0.081-1.19-0.141-2.91-0.584-4.48 0.2 2.39-0.0405 3.13 0 5.84-0.342-0.28-0.443-0.179-0.389 0.389 0.444 0.619 1.03-0.252 1.56-0.195 0.0956-0.614-0.351-0.686-0.195-1.36 0.316 0.0553 0.305 0.286 0.391-0.195 0.429-2.41-0.329-5.39 0.779-6.62 0.0953 3.83-0.0694 5.11-0.391 8.57 0.783-0.134 1.11 0.187 1.75 0.195 0.431-1.83-0.61-5.63 0-8.18 0.164 0.745 0.255 1.56 0.195 2.53 1.13-0.587-0.379-2.77 0.584-3.31 0.208 3.28-0.0247 6.02-0.389 8.96 0.665-0.0493 0.935-0.494 1.75-0.391-0.324-2.96-0.751-6.09-0.195-8.96 0.349 2.38 1.64 4.85 1.17 7.4-0.286-1.92-0.636-3.78-1.17-5.45 0.308 2.47 0.384 4.63 0.391 7.01 2.58-0.19 4.01 0.398 6.42 0-0.0322-2.86-1.44-5.51-0.584-7.59 0.631 3.03 0.974 4.95 0.779 7.59 1.03-0.0217 1.65-0.39 3.12-0.389-0.406-3.49-2.4-7.07-3.5-10.5 0.471-0.373 1.56-0.123 2.34-0.195 1.74-3.19 0.693-6.95-1.56-9.73-1.31-1.62-5.37-4.62-7.2-4.87zm0.973 0.779c-2.09 0.742-4.43 0.482-6.62 0.584-14.6 0.679-30.9 1.71-45 2.34 10.3-0.00382 20.6-0.925 31-1.36 3.42-0.144 6.89-0.506 10.1-0.389 3.75 0.136-1.31 0.358-1.95 0.389-11.3 0.535-24.1 1.06-35.2 1.56 16.2-0.105 33.1-1.36 48.9-2.14-0.555 0.774-1.59 0.333-2.34 0.391-11.8 0.904-24.2 1.08-35 1.75 13.2 0.0322 25.6-1.19 38.9-1.56-11.2 1.06-23.3 1.14-34.7 1.95 10.8-0.0257 22.1-1.08 33.7-1.36 0.348-0.0551 0.398 0.185 0.193 0.195-1.72-0.00667-2.6 0.106-4.48 0.195 1.9 0.601 4.1-0.178 6.23 0-16.1 1.32-32.5 1.51-48.1 2.72 14.9-0.175 30.6-1.86 45.2-2.14 0.594-0.0114 3.23-0.102 2.34 0.193-15.8 0.773-33 1.47-49.3 2.53 15.5-0.627 29.6-1.22 43.8-1.95 2.27-0.117 4.7-0.654 6.62-0.389 0.334 0.0462 0.411 0.107 0.195 0.195-22.8 1.21-44.8 2.25-67.8 3.31 4.55-0.786 9.43-0.404 14.4-0.973-4.56-0.207-10.1 0.295-14.8 0.584 2.02-0.708 4.94-0.516 7.59-0.584-0.819-0.569-2.47-0.268-3.89-0.195-1.45 0.0739-2.99 0.154-4.09 0.391 0.245-0.759 1.26-0.339 1.75-0.391 7.91-0.827 18.6-0.949 27.5-1.36 9.98-0.464 20.2-1.26 30.2-1.56-16.5 0.356-34.8 1.36-51.2 2.14-0.945 0.0448-2.58 0.597-3.31 0 0.011 0.00882 0.202-0.119-0.193-0.195-1.49-0.286-3.36 0.235-4.87 0.389-0.151-0.753 0.972-0.359 1.17-0.389 7.4-1.14 17.9-0.654 26.5-1.36-0.956-0.526-2.2-0.253-3.31-0.195-8.28 0.434-16.8 0.947-24.3 1.17 2.49-0.689 6.09-0.273 8.76-0.779-1.32-0.218-4.18 0.0249-6.23 0.193 0.562-0.759 1.75-0.343 2.53-0.389 2.34-0.136 5.07-0.341 7.4-0.389-4.46-0.394-8.38 0.32-12.9 0.389 1.47-0.409 2.58-0.499 4.09-0.584 19.9-1.12 40.7-1.51 60.5-2.92zm0.195 0.584c-2.41 0.769-6.07 0.294-8.76 0.779 0.216-0.752 1.79-0.304 2.92-0.391 1.41-0.108 4.16-0.291 5.84-0.389zm-2.92 1.95c-0.861 0.288-1.4-0.265-1.75 0.389 0.63-0.0839 1.68 0.257 1.75-0.389zm-45.2 0.584c-0.723 0.315-1.91 0.16-2.92 0.193-0.331-0.0436-0.482 0.22 0 0.195 0.883-0.22 2.67 0.465 2.92-0.389zm-4.22 0.223c-0.195 0.00524-0.442 0.0981-0.643 0.166h-1.56c-2.37 0.198 1.66 0.235 2.53 0.195 0.00703-0.284-0.135-0.367-0.33-0.361zm19.6 0.125c-0.673-0.00186 0.00284 0.561 0.377 0.041-0.156-0.0281-0.281-0.0408-0.377-0.041zm-0.598 0.041c-7.59 0.391-15.8 0.914-23.6 1.17-0.49 0.016-1.46-0.355-1.75 0.389 5.72-0.283 12.1-0.709 17.5-0.973 2.68-0.13 5.34 0.151 7.79-0.584zm-29.4 0.154c0.318 0.0103 0.294 0.0844-0.605 
0.234-0.471 0.0784-0.476 0.266-1.36 0.195-0.0435-0.286 1.44-0.447 1.97-0.43zm66.6 0.408c0.246 0.0191 0.534 0.0775 0.705 0.0215-0.0243 0.43-0.827 0.0835-1.17 0.195 0.0122-0.215 0.219-0.236 0.465-0.217zm-0.854 0.0215c-3.08 0.722-6.23 0.25-9.35 0.779 2.61 0.466 5.3-0.486 8.76-0.195 1.79 0.15-0.0346 0.166-0.391 0.195-7.82 0.652-18.1 0.974-26.5 1.36-12.3 0.572-26.2 0.999-37.6 1.95 16.3-0.727 33.8-1.47 50.8-2.34 3.53-0.179 10.7-1.06 15.8-0.777 2.54 0.14-2.54 0.306-3.7 0.389-20.3 1.45-41.7 1.64-62.5 3.12 0.73 0.54 2.84-0.0633 3.31 0.195-2.28 0.719-3.56 0.183-4.87-1.17 5.37-0.75 10.9-0.938 16.4-1.17 16.5-0.697 33-1.52 49.8-2.34zm1.15 0.367c0.28 0.0203 0.599 0.0814 0.801 0.0234-0.0841 0.435-0.96 0.0775-1.36 0.193 0.0419-0.218 0.283-0.237 0.562-0.217zm-10.7 0.412c-14.7 0.589-27.4 1.3-42.8 1.95-4.36 0.184-8.77 0.295-13.2 0.973 18.2-1.04 37.3-1.57 56.1-2.92zm-38.7 0.193c-3.72 0.333-7.5 0.398-10.9 0.584 2.48 0.442 6.41-0.0578 8.76-0.193 0.819-0.0474 2.08 0.366 2.53-0.391h-0.391zm-15.4 0.00195c-0.568 0.859-2.75 0.11-3.7 0.584 0.201-0.453 2.6-0.385 3.7-0.584zm64.4 0.566c1.14 0.0312 1.69 0.179 0.0918 0.211-1.57 0.0315-4.85 0.365-6.04 0.195 1.26-0.43 3.34-0.0328 4.67-0.389 0.45-0.0242 0.893-0.028 1.27-0.0176zm-51.5 0.211c-2.45 0.0491 0.0431 0.467 1.17 0-0.443 0.0072-0.703-0.0093-1.17 0zm53.7 0.195c-21.5 1.66-43.8 2.03-66.6 3.7 0.766-0.827 2.46-0.5 4.09-0.584 20.1-1.03 40.5-2.13 62.5-3.12zm-13.2 0.195c0.158-0.0076 4.42 0.0822 4.67 0-1.61 0.535-5.47 0.107-7.2 0.193-11.7 0.583-26.9 1.4-40.3 2.14-2.02 0.112-6.48 0.578-7.01 0.195 0.0527 0.0381 4.46-0.316 6.04-0.391 6.14-0.29 12.6-0.649 18.9-0.973 9.15-0.47 17.1-0.797 24.9-1.17zm12.9 0.287c0.248-0.017 0.429 0.0328 0.387 0.297-19.4 0.681-38.6 2.08-58.4 2.92-2.18 0.0918-4.11 0.111-6.23 0.389-0.454 0.0596-0.909 0.322-1.36 0 0.238-0.406 0.949-0.356 1.36-0.389 9.57-0.75 20.2-1.02 29.8-1.56 10.2-0.563 20.6-1.03 31.2-1.56 0.273-0.0138 1.4 0.0067 2.53 0 0.219-0.00143 0.533-0.0846 0.781-0.102zm1.36 0.49c-10 0.993-20.6 1.22-30.8 1.75-9.64 0.505-19.5 0.99-28.8 1.56-0.104-0.771 0.952-0.368 1.17-0.389 8.21-0.77 18.4-0.994 27.1-1.36 10.5-0.446 21-1.14 31.3-1.56zm-0.162 0.574c0.358 9.23e-4 0.673 0.167 0.746 0.789-0.149 0.823-1.14 0.346-1.56 0.389-3.91 0.393-9.9 0.556-14.6 0.779-16.4 0.779-34.7 2.11-50.4 2.34 0.285-1.26 2.49-0.874 3.7-0.973 8.83-0.723 18.9-0.919 28.4-1.36 7.84-0.365 15.7-0.993 24.1-1.36 2.9-0.128 5.88 0.0632 8.57-0.389 0.247-0.0416 0.648-0.206 1.01-0.205zm-3.34 0.594c-0.305 0.815 1.19 0.213 0.391 0-0.0412 0.235-0.217 0.12-0.391 0zm2.14 0c-0.291 0.0331-0.719-0.069-0.779 0.195 0.291-0.0334 0.719 0.069 0.779-0.195zm-58.8 1.75c-0.446 0.765-1.63 0.328-2.53 0.389-0.969 0.066-1.84 0.343-2.73 0.195-1.45-0.15 1.47-0.353 1.95-0.391 0.855-0.0665 2.52-0.0499 3.31-0.193zm60 0.348c0.205 8.05e-4 0.455 0.0127 0.752 0.041-1.47 0.505-2.19-0.0466-0.752-0.041zm-3.85 0.211c0.575 0.0234 1.19 0.0887 1.68 0.0254-0.654 0.45-2.14 0.067-3.12 0.193 0.327-0.225 0.863-0.242 1.44-0.219zm-2.72 0.182c0.188 8.7e-5 0.42 0.00994 0.699 0.0371-1.22 0.509-2.02-0.0377-0.699-0.0371zm-2.05 0.0137c0.346 0.0217 0.732 0.0837 0.996 0.0234-0.208 0.442-1.22 0.075-1.75 0.195 0.104-0.221 0.41-0.24 0.756-0.219zm6.95 0.18c0.605 0.0175 1.17 0.0805 1.64 0.234-2.31 0.0217-4.55 0.118-6.62 0.389 0.622-0.51 2.06-0.534 3.12-0.584 0.616-0.0293 1.26-0.0566 1.86-0.0391zm-10.1 0.0332c0.472-0.00705 0.951-0.0057 1.44 0.00586-2.42 0.249-8.48 0.82-11.1 0.389 3.44 0.0938 6.36-0.345 9.66-0.395zm11.6 0.348c0.0603-0.00636 0.115 0.00558 0.156 0.0469 0.134 0.134-0.768 0.537-0.584 0.195 0.0216-0.0399 0.247-0.223 0.428-0.242zm-8.21 
0.0469c0.515-0.0102 1.06 0.0076 1.56 0-1.29 0.461-4.48 0.0586-1.56 0zm7.4 0c0.396 0.72-1.49 0.341-2.34 0.391-1.54 0.0903-3.9 0.385-5.84 0.389h-1.36c-0.849 0.00429-0.0334-0.336 0.389-0.195h1.36c2.39-0.396 5.43-0.153 7.79-0.584zm-24.1 0.188c0.666-0.0157 1.4 0.0365 2.05 0.00781 2.52 0.0686-0.542 0.303-1.56 0.195-0.691 0.117-0.98 0.215-2.14 0.193 0.383-0.298 0.983-0.381 1.65-0.396zm10.6 0.00781c-0.664 0.755-2 0.339-3.12 0.391-0.959 0.0446-2.16 0.369-3.12 0.193 2.25 0.663 7.04-0.279 9.54 0-3.35 0.35-6.93 0.469-10.5 0.584 0.801 0.562 3.53 0.0739 5.06 0 5.06-0.243 11.1-0.328 16.2-0.777-1.05 0.752-2.37 0.339-3.5 0.389-15 0.659-28.2 1.64-43.6 2.34-6.1 0.277-12.9 0.347-17.9 0.973-0.421-0.299-0.997-2.28-0.195-2.53-0.716 0.466 1.17 0.791-0.193 0.777 1.19 0.855 6.22-0.584 7.98 0-0.553-0.184-1.05 0.453-2.92 0.195-1.4 0.357-3.49 0.0133-4.87 0.389 14.6-0.151 29.5-1.48 44.8-1.95-0.832-0.667-4.3 0.156-5.45-0.193 3.21-0.449 7.82-0.311 11.9-0.779zm1.75 0c0.609-0.0098 1.17 0.0074 1.75 0-1.47 0.466-5.05 0.0529-1.75 0zm-15.5 0.146c0.533 0 1.07 0.0806 1.27 0.242-1.05-0.124-1.94 0.216-2.53 0 0.2-0.162 0.733-0.242 1.27-0.242zm-2.71 0.197c0.352-0.00562 0.703 0.00566 1.06 0.0449-3.07 0.511-8.85 0.362-12.1 0.779 2.24-0.724 5.1-0.462 7.79-0.584 1.11-0.0506 2.17-0.223 3.23-0.24zm2.69 0.412c0.51 0.0231 1.06 0.0862 1.48 0.0234-0.525 0.449-1.88 0.0678-2.72 0.193 0.262-0.224 0.732-0.24 1.24-0.217zm-1.44 0.0234c-0.187 0.396-0.748 0.42-1.36 0.389 0.23-0.354 1.02-0.152 1.36-0.389zm-2.47 0.115c0.257-0.0032 0.5 0.0189 0.721 0.0801-0.318-0.0882-1.42 0.159-1.95 0.193-5.7 0.379-12.3 0.975-18.9 0.975-1.77 0 0.453-0.189 0.584-0.195 0.204-0.00953 0.914 0.14 1.17-0.195 4.42-0.0906 10.9-0.347 16-0.584 0.738-0.0341 1.62-0.264 2.39-0.273zm-32.8 0.0137c0.34 0.03 0.424 0.179-0.375 0.26-0.343 0.0348-0.416 0.194-0.779 0.195 0.219-0.396 0.815-0.485 1.15-0.455zm22.4 0.0664c-0.632 0.167-5.16 0.677-6.23 0.193 2.29 0.151 4-0.286 6.23-0.193zm43.2 0.17c0.359 0.00626 0.0638 0.412-0.197 0.414-10.8 1.26-22.4 1.32-33.5 1.95-10.6 0.6-23.1 1.32-32.5 1.36 0.362-0.769 1.36-0.333 1.95-0.389 8.74-0.831 20.5-1.02 30.6-1.56 11-0.588 22.1-0.89 32.3-1.56 0.451-0.0293 0.463-0.287 1.17-0.195 0.08-0.0177 0.146-0.0243 0.197-0.0234zm-22.8 0.0234c-13.7 0.661-27.8 1.54-40.5 2.14-1.18 0.0565-2.63-0.367-3.11 0.389 12.8-0.768 28-1.25 41.1-2.14 0.742-0.0503 1.91 0.379 2.53-0.391zm-31.5 0.162c0.344-0.0091 0.67-2.82e-4 0.973 0.0332-0.834 0.514-2.73 0.512-4.28 0.584-1.88 0.0872-2.52 0.326-3.5 0 1.19 0.394 4.41-0.554 6.81-0.617zm2.51 0.00977c0.542 0.0232 1.12 0.0865 1.58 0.0234-0.589 0.449-2.01 0.0692-2.92 0.195 0.294-0.225 0.797-0.242 1.34-0.219zm-10.7 0.502c0.399-0.0211 0.74 0.0018 0.943 0.105-0.243-0.0584-3.59 0.696-4.09 0.195 0.214 0.214 1.95-0.237 3.15-0.301zm8.54 0.105c-0.775 0.782-2.9 0.218-4.28 0.389 1.14-0.413 3.05-0.0624 4.28-0.389zm-15 0.584c0.43 0.0241 0.0813 0.826 0.193 1.17-0.375-0.0791-0.152-0.755-0.777-0.584-0.142-0.64 0.386-0.298 0.389 0 0.228-0.0312 0.165-0.355 0.195-0.584zm0.664 1.27c0.322 0.0442 1.2 0.44 1.09 0.676-0.593 0.00882-0.757-0.411-1.17-0.584-0.0724-0.0824-0.0271-0.107 0.0801-0.0918zm-4.17 0.0918c-0.187 0.398-0.398 0.77-0.584 1.17-0.204-0.0298 0.0914-1.19 0.584-1.17zm56.7 0.193c0.592 2.99-0.396 6.09-0.389 9.15-1.23-0.33-0.278-1.72-0.195-2.73 0.183-2.23 0.136-4.58 0.584-6.43zm-54.9 0.391c0.37 0.446-0.396 1.41-0.391 2.14-0.512-0.506-0.154 0.315-0.779 0.195 0.299-0.869 0.827-1.51 1.17-2.34zm-0.779 0.193c-0.19 0.915-0.937 2.96-1.95 3.7 0.528-1.35 1.4-2.36 1.95-3.7zm38 0c0.43 0.0243 0.0831 0.827 0.195 1.17 0.203 2.01-0.387 4.24-0.391 6.62-0.24 
0.0851-0.399 0.249-0.389 0.584-0.567-2.26 0.468-4.59 0.584-6.81v-1.56zm2.14 0c1.14 2.04 1.23 5.13 1.75 7.79 0.966-2.29 0.603-4.98 1.36-7.21 0.399 2.61-0.524 5.58-0.975 7.98-0.301-0.469-0.408-0.831-0.389 0.195-0.978-1.75-0.915-4.54-1.36-6.81-0.165 2.17 0.356 4.41 0.389 7.21-0.565-1.28 0.0139-0.138-0.777 0.193 0.0648-3.42-0.308-6.71 0-9.35zm-40.6 0.148c0.0756 0.03 0.0914 0.208-0.119 0.631-0.3-0.32-0.00686-0.681 0.119-0.631zm-1.87 0.0469c0.473 0.299-0.419 0.865-0.391 1.36-0.489-0.315 0.347-0.937 0.391-1.36zm56.3 0c0.958 1.9 0.462 5.25 0.584 7.98-0.466-1.56-0.72-4.9-0.584-7.98zm-29.2 0.195c0.261 0.128 0.362 0.417 0.389 0.779-0.586 0.196-0.345-0.434-0.389-0.779zm5.45 0c0.855 1.37 1.25 4.18 1.17 5.84 0.705-0.497 0.459-1.91 0.584-2.92 0.0922-0.739 0.207-1.43 0.389-1.95 0.568 2.38-0.791 5.36-0.389 7.59-0.62-1-0.133 0.546-0.584 0.779-0.977-0.454-0.148-1.37-0.195-2.14-0.137-2.22-0.812-4.91-0.973-7.2zm3.89 0c0.529 3.03-0.0947 5.54-0.584 7.98-0.421-2.5 0.585-5.26 0.584-7.98zm0.586 0c0.0518 0.802 0.45 0.271 0.434 0.0684-0.0111-0.0131-0.0379-0.0667-0.0449-0.0684 0.0303 0.00731 0.0422 0.0341 0.0449 0.0684 0.128 0.151 0.478 0.913 0.344 0.127 0.746 1.59 1.1 3.58 1.17 5.84 0.693-0.559 0.315 2.03 0.195 2.92-0.79-2.32-0.884-5-1.36-7.98-0.709 2.29 0.473 5.69-0.193 8.18-0.33 0.682-0.502-0.381-0.391-0.779-0.551-0.0319 0.162 1.2-0.389 1.17-1.07-0.886-0.108-2.18 0-3.31 0.185-1.94 0.0807-4.14 0.195-6.23zm4.67 0c0.32 1.91 0.709 5.21 0.584 7.79-0.011 0.225 0.0668 1.62-0.195 0.779v-1.17c-0.117-0.466-0.278-0.889-0.193-1.56-0.32 0.718-0.246 1.83-0.391 2.73-0.062 0.95-0.308-0.188-0.195-0.584 0.192-3.33 0.231-5.21 0.391-7.98zm10.7 0c0.328 0.924 0.416 3.92 0.389 5.84-0.331 0.0715-0.367-0.151-0.389-0.389-0.412 0.325-0.121 1.15-0.195 1.75-0.113 0.915 0.0233 0.718-0.391 1.17-0.227-1.85 0.225-5.5 0.586-8.37zm3.11 0c0.531 0.215 0.146 1.27 0.195 1.95 0.157 2.13 0.226 3.96-0.195 6.04-0.737-2.5 0.186-5.14 0-7.98zm15.9 0.0371c0.0984 0.0567 0.249 0.423 0.301 0.547 0.943 2.17 1.77 4.45 2.34 7.01-1.39-1.92-1.56-5.06-2.72-7.2-0.0114-0.309 0.0289-0.386 0.0879-0.352zm-0.992 0.0234c0.0777-0.0047 0.165 0.325 0.125 0.523-0.0744 0.371-0.248-0.125-0.195-0.391 0.0186-0.0926 0.0444-0.131 0.0703-0.133zm-19.3 0.135c0.439 0.145 0.0752 1.09 0.193 1.56 0.435-0.0837 0.0795-0.96 0.195-1.36 0.587 1.7-0.126 5.55-0.389 7.59-0.604-0.366 0.193-1.55-0.195-1.75-0.536 0.242 0.246 1.8-0.779 1.56 0.385-2.47 0.617-5.09 0.975-7.59zm-9.93 0.193c0.315 3.2-0.183 6.07-0.779 8.96-0.48-1.91 0.625-5.97 0.779-8.96zm12.1 0.195c0.357 0.888 0.353 4.16 0.584 6.04-0.107 0.432 0.373 0.277 0.391 0.584 0.145 0.882-0.536-0.298-0.391 0.584-0.0362 0.166 0.0766 0.184 0.195 0.195-0.334 0.569-0.483-0.666-0.389-1.17-0.581 0.133 0.334 1.76-0.779 1.36 1.03-2.03-0.268-5.19 0.389-7.59zm14.2 0c0.426 0.163 0.0859 2.77 0.195 3.89-0.426-0.163-0.0866-2.77-0.195-3.89zm2.14 0c0.307 1.41 0.818 4.47 0.584 6.81-0.49-2.05-0.685-4.73-0.584-6.81zm-65.8 0.195c0.486 0.719-0.835 1.07 0 0zm58.6 0c0.461 1.7 0.122 4.27 0 6.04-0.662-2.05-0.158-3.48 0-6.04zm5.65 0c0.843-0.00953 0.422 0.921 0.584 1.56 0.452 1.77 0.994 3.94 0.973 5.06-0.0205 1.13 0.0162-0.569-0.582-0.975v-1.36c-0.43 0.0243-0.0831 0.825-0.195 1.17v1.17c-0.319-1.69-0.394-3.63-0.389-5.65-0.498-0.0851-0.114-0.185-0.391-0.973zm-63 0.121c0.146 0.0205-0.262 1.27-0.488 1.44-0.365 0.489 0.116-0.187 0-0.584-0.173 0.0865-0.347 0.17-0.389 0.389-0.525 0.34 0.346-0.653 0.195-1.17 0.324 0.492 0.303 0.726 0.584 0 0.0457-0.0538 0.0767-0.0752 0.0977-0.0723zm30.3 0.0723c0.479 2.26-0.0221 6.24-0.389 8.37-0.676-2.82 0.412-5.56 0.389-8.37zm-4.74 
0.0156c-0.0307-0.0235-0.0729 0.0222-0.129 0.18-0.0257 0.256-0.029 1.41 0.195 0.779 0.0191-0.192 0.0258-0.889-0.0664-0.959zm36.5 0.18c0.292 0.754 0.144 2.35 0 3.5-0.0806 0.647-0.0137 1.64-0.584 1.75 0.335-1.61 0.268-3.63 0.584-5.26zm-46.3 0.195c0.226-0.0317 0.346 0.0415 0.389 0.193-0.128 2.2 0.079 4.87-0.193 6.81-0.0989 0.707-0.352 1.57-0.779 1.95-0.129-3.31 0.533-5.83 0.584-8.96zm-6.81 0.389c0.661 1.07-0.717 3.58 0.195 4.28-0.512 0.0558-0.182 0.0788-0.391 1.17-0.201 1.05-0.864 2.23-0.777 3.31-0.439-0.368-0.253-1.53-0.195-2.14 0.197-2.11 0.925-4.68 1.17-6.62zm-10.3 0.195c0.0548 0.509-0.0709 0.836-0.584 0.777-0.0548-0.509 0.0709-0.836 0.584-0.777zm64.6 0.434c0.0921 0.12 0.0495 0.909 0.0508 1.12-0.268 0.62-0.193-0.687-0.195-0.973 0.0671-0.155 0.114-0.19 0.145-0.15zm-21.8 0.0664c-0.0764-0.111-0.101 0.675-0.0996 0.668-0.118 0.601-0.189 2.05 0 2.92 0.0567-1.17 0.281-2.42 0.193-3.12-0.0377-0.299-0.0683-0.436-0.0938-0.473zm10.9 0.0977c0.0922 0.0705 0.0855 0.767 0.0664 0.959-0.224 0.63-0.221-0.523-0.195-0.779 0.056-0.157 0.0982-0.203 0.129-0.18zm-10.1 0.375c-0.88 0.141-0.061 0.961 0 0zm-45.8 0.322c0.171-0.00179-0.398 0.995-0.512 1.24-0.391-0.157 0.35-0.803 0.391-1.17 0.0582-0.0461 0.0966-0.0662 0.121-0.0664zm57.6 0.104c-0.0266-0.0184-0.0563 0.021-0.0879 0.158-0.171 0.741 0.105 0.975 0.195 0.584 0.0232-0.102-0.0276-0.687-0.107-0.742zm-31.7 0.193c-0.0266-0.0184-0.0563 0.0229-0.0879 0.16-0.171 0.741 0.103 0.975 0.193 0.584 0.0234-0.102-0.0257-0.689-0.105-0.744zm40.8 0.354c0.45 0.653 0.069 2.14 0.195 3.12-0.701-0.332-0.425-2.48-0.195-3.12zm-35.8 0.258c-0.0259 0.00159-0.0517 0.0402-0.0703 0.133-0.0529 0.265 0.121 0.759 0.195 0.389 0.0399-0.199-0.0473-0.526-0.125-0.521zm-5.13 0.717c0.238 0.577-0.2 1.24 0.389 1.75-0.00691-0.707 0.11-1.54-0.389-1.75zm40.2 0.0957c0.193-0.0244 0.0431 1.29 0.0898 1.66-0.346 0.519-0.141-1.14-0.193-1.56 0.0431-0.0649 0.076-0.0961 0.104-0.0996zm-56.5 0.285c0.0922 0.0214 0.133 0.608 0.0859 0.787 0.741 0.313 0.33 2.17 0.389 3.5-0.263-0.473-0.389-0.12-0.584-0.389 0.273-0.306 0.525-0.62 0.391-1.75-0.422-0.0329-0.0888 0.69-0.195 0.973-0.409 0.142-0.0983-2.1-0.195-2.92 0.0408-0.155 0.0786-0.21 0.109-0.203zm36.5 0.00781c0.05 1.12-0.147 2.48 0.193 3.31-0.245-0.0427-0.397-0.303-0.389 0.195 1.31-0.13 0.192-3.43 0.195-3.51zm11.1 0v3.31c0.722-0.341 0.518-2.93 0-3.31zm-26.3 0.195c0.403 1.87-1.12 3.65 0 4.67 0.883-0.412 0.16-4.14 0-4.67zm27.6 0.404c0.0922 0.0704 0.0854 0.767 0.0664 0.959-0.224 0.63-0.222-0.524-0.195-0.779 0.056-0.158 0.0982-0.203 0.129-0.18zm-21.6 0.225c0.0921 0.12 0.0515 0.909 0.0527 1.12-0.268 0.62-0.193-0.687-0.195-0.973 0.0671-0.155 0.112-0.19 0.143-0.15zm-12.4 0.15c-0.192 1.5-0.0842 3.29-0.584 4.48 0.55-0.02 0.825-0.732 1.36-0.193 0.0415-1.87-0.0725-2.95-0.193-4.28h-0.584zm-4.09 0.193c0.121 1.47-0.151 3.33 0.584 3.89-0.254-0.969-0.313-3.53-0.584-3.89zm40 0.0352c0.0799 0.0553 0.129 0.643 0.105 0.744-0.0906 0.391-0.365 0.157-0.193-0.584 0.0316-0.137 0.0613-0.179 0.0879-0.16zm-37.9 0.744c0.0941 0.992-1.07 2.99 0 3.12-0.0255-0.362 0.162-3.34 0-3.12zm-7.59 0.584c-0.591 0.656 0.72 0.59 0 0zm11.3 0c-0.183 0.424-0.563 2.43 0.195 2.53 0.474-0.613 0.0678-2.01-0.195-2.53zm-2.79 0.0156c0.0923 0.0705 0.0857 0.767 0.0664 0.959-0.224 0.63-0.221-0.523-0.195-0.779 0.0561-0.157 0.0981-0.203 0.129-0.18zm-15.4 0.0625c0.186-0.073 0.0589 1.16 0.0938 1.48-0.326 0.561-0.153-0.995-0.193-1.36 0.0408-0.0701 0.073-0.107 0.0996-0.117zm23 0.193c0.186-0.0729 0.0591 1.16 0.0938 1.48-0.326 0.561-0.155-0.995-0.195-1.36 0.0408-0.0701 0.075-0.107 0.102-0.117zm5.94 0.312c0.434 0.588 0.153 0.87-0.391 0.391 
0.0486-0.211 0.388-0.133 0.391-0.391zm-17.4 1.01c0.0799 0.0552 0.131 0.64 0.107 0.742-0.0903 0.391-0.367 0.157-0.195-0.584 0.0316-0.137 0.0613-0.177 0.0879-0.158zm-13.3 0.221c0.0775-0.00467 0.165 0.323 0.125 0.521-0.0741 0.37-0.247-0.124-0.193-0.389 0.0185-0.0926 0.0425-0.131 0.0684-0.133z"/> + <path d="m79.9 36.4c-19.9 1.41-40.7 1.8-60.5 2.92-1.51 0.0851-2.62 0.175-4.09 0.584 4.47-0.0684 8.39-0.783 12.8-0.389-2.32 0.0481-5.06 0.252-7.4 0.389-0.777 0.0455-1.97-0.37-2.53 0.389 2.05-0.168 4.91-0.411 6.23-0.193-2.67 0.506-6.27 0.091-8.76 0.779 7.5-0.223 16.1-0.736 24.3-1.17 1.11-0.0582 2.35-0.33 3.31 0.195-8.59 0.709-19.1 0.225-26.5 1.36-0.195 0.0303-1.32-0.364-1.17 0.389 1.5-0.153 3.37-0.675 4.87-0.389 0.396 0.076 0.204 0.202 0.193 0.193 0.731 0.597 2.37 0.0448 3.31 0 16.4-0.78 34.7-1.78 51.2-2.14-9.95 0.303-20.2 1.09-30.2 1.56-8.88 0.413-19.5 0.534-27.5 1.36-0.489 0.0512-1.51-0.369-1.75 0.391 1.1-0.236 2.64-0.317 4.09-0.391 1.43-0.0727 3.07-0.374 3.89 0.195-2.66 0.0679-5.57-0.123-7.59 0.584 4.71-0.289 10.2-0.791 14.8-0.584-4.98 0.568-9.86 0.187-14.4 0.973 23-1.06 45-2.1 67.8-3.31 0.215-0.0882 0.139-0.149-0.195-0.195-1.92-0.266-4.35 0.272-6.62 0.389-14.2 0.73-28.3 1.32-43.8 1.95 16.3-1.06 33.4-1.76 49.3-2.53 0.888-0.295-1.74-0.205-2.34-0.193-14.6 0.28-30.3 1.97-45.2 2.14 15.6-1.21 31.9-1.41 48.1-2.72-2.13-0.178-4.33 0.601-6.23 0 1.88-0.0894 2.76-0.202 4.48-0.195 0.205-0.0107 0.152-0.25-0.195-0.195-11.6 0.285-22.9 1.34-33.7 1.36 11.3-0.811 23.4-0.889 34.7-1.95-13.4 0.373-25.8 1.59-38.9 1.56 10.8-0.671 23.3-0.848 35-1.75 0.75-0.0577 1.78 0.383 2.34-0.391-15.8 0.781-32.7 2.04-48.9 2.14 11.1-0.495 23.9-1.02 35.2-1.56 0.642-0.0303 5.7-0.253 1.95-0.389-3.24-0.117-6.7 0.245-10.1 0.389-10.4 0.438-20.7 1.36-31 1.36 14-0.623 30.4-1.66 45-2.34 2.19-0.102 4.53 0.158 6.62-0.584zm0.195 0.584c-1.68 0.0977-4.43 0.28-5.84 0.389-1.13 0.087-2.71-0.362-2.92 0.391 2.69-0.485 6.35-0.0107 8.76-0.779zm-2.92 1.95c-0.0684 0.646-1.12 0.305-1.75 0.389 0.347-0.653 0.891-0.1 1.75-0.389zm-45.2 0.584c-0.249 0.854-2.04 0.169-2.92 0.389-0.482 0.0245-0.331-0.239 0-0.195 1.01-0.0329 2.2 0.122 2.92-0.193zm-4.22 0.223c0.195-0.00524 0.339 0.0777 0.332 0.361-0.869 0.0396-4.9 0.00322-2.53-0.195h1.56c0.2-0.0679 0.448-0.161 0.643-0.166zm19.6 0.125c0.0961 2.66e-4 0.221 0.0129 0.377 0.041-0.374 0.52-1.05-0.0429-0.377-0.041zm-0.598 0.041c-2.45 0.735-5.11 0.454-7.79 0.584-5.46 0.264-11.8 0.69-17.5 0.973 0.288-0.744 1.26-0.373 1.75-0.389 7.78-0.254 16-0.777 23.6-1.17zm-29.4 0.154c-0.531-0.0172-2.01 0.144-1.97 0.43 0.887 0.071 0.892-0.117 1.36-0.195 0.9-0.15 0.924-0.224 0.605-0.234zm66.6 0.408c-0.246-0.0191-0.453 0.00183-0.465 0.217 0.342-0.112 1.15 0.235 1.17-0.195-0.171 0.0559-0.459-0.00239-0.705-0.0215zm-0.854 0.0215c-16.9 0.814-33.3 1.64-49.8 2.34-5.47 0.23-11 0.418-16.4 1.17 1.31 1.35 2.59 1.89 4.87 1.17-0.467-0.259-2.58 0.347-3.31-0.193 20.8-1.48 42.2-1.67 62.5-3.12 1.16-0.0825 6.24-0.251 3.7-0.391-5.06-0.279-12.2 0.6-15.8 0.779-17.1 0.867-34.5 1.61-50.8 2.34 11.4-0.948 25.3-1.37 37.6-1.95 8.37-0.389 18.7-0.712 26.5-1.36 0.356-0.0298 2.17-0.0454 0.389-0.195-3.46-0.29-6.15 0.662-8.76 0.195 3.12-0.529 6.26-0.0572 9.35-0.779zm1.15 0.367c-0.28-0.0203-0.521-9.15e-4 -0.562 0.217 0.403-0.116 1.28 0.242 1.36-0.193-0.202 0.0578-0.521-0.00318-0.801-0.0234zm-10.7 0.412c-18.8 1.35-37.8 1.88-56.1 2.92 4.47-0.677 8.87-0.789 13.2-0.973 15.4-0.65 28.1-1.36 42.8-1.95zm-54.1 0.193c-1.1 0.199-3.5 0.132-3.7 0.584 0.954-0.474 3.13 0.275 3.7-0.584zm15.4 0.00195h0.391c-0.454 0.757-1.71 0.341-2.53 0.389-2.35 0.136-6.28 0.637-8.76 0.195 3.4-0.186 7.18-0.251 
10.9-0.584zm49 0.566c-0.379-0.0104-0.822-0.00661-1.27 0.0176-1.33 0.356-3.41-0.0413-4.67 0.389 1.19 0.17 4.47-0.164 6.04-0.195 1.6-0.032 1.04-0.18-0.0918-0.211zm-51.5 0.211c0.465-0.0093 0.725 0.0072 1.17 0-1.12 0.467-3.62 0.0491-1.17 0zm53.7 0.195c-22 0.986-42.4 2.08-62.5 3.12-1.63 0.0839-3.32-0.243-4.09 0.584 22.8-1.67 45.1-2.04 66.6-3.7zm-13.2 0.195c-7.79 0.371-15.8 0.698-24.9 1.17-6.32 0.324-12.7 0.682-18.9 0.973-1.57 0.0744-5.98 0.429-6.04 0.391 0.527 0.382 4.99-0.0833 7.01-0.195 13.4-0.743 28.6-1.56 40.3-2.14 1.74-0.086 5.59 0.342 7.2-0.193-0.248 0.0822-4.51-0.0076-4.67 0zm12.9 0.287c-0.248 0.017-0.562 0.1-0.781 0.102-1.14 0.0067-2.26-0.0138-2.53 0-10.5 0.531-21 0.993-31.2 1.56-9.62 0.532-20.2 0.809-29.8 1.56-0.412 0.0324-1.13-0.0174-1.36 0.389 0.452 0.322 0.909 0.0596 1.36 0 2.12-0.278 4.05-0.297 6.23-0.389 19.9-0.837 39.1-2.24 58.4-2.92 0.0428-0.264-0.139-0.314-0.387-0.297zm1.36 0.49c-10.3 0.415-20.8 1.11-31.3 1.56-8.69 0.37-18.9 0.594-27.1 1.36-0.216 0.0203-1.27-0.383-1.17 0.389 9.29-0.567 19.2-1.05 28.8-1.56 10.1-0.531 20.7-0.761 30.8-1.75zm-0.162 0.574c-0.358-9.23e-4 -0.76 0.163-1.01 0.205-2.69 0.452-5.67 0.263-8.56 0.391-8.43 0.37-16.3 0.997-24.1 1.36-9.55 0.444-19.6 0.641-28.4 1.36-1.21 0.0989-3.41-0.292-3.7 0.973 15.7-0.222 34.1-1.56 50.4-2.34 4.7-0.224 10.7-0.384 14.6-0.777 0.42-0.0424 1.41 0.433 1.56-0.391-0.0733-0.622-0.388-0.788-0.746-0.789zm-3.34 0.594c0.174 0.12 0.349 0.235 0.391 0 0.799 0.213-0.696 0.815-0.391 0zm2.14 0c-0.0598 0.264-0.488 0.162-0.779 0.195 0.0601-0.264 0.488-0.162 0.779-0.195zm-58.8 1.75c-0.784 0.143-2.45 0.127-3.31 0.193-0.481 0.0374-3.4 0.241-1.95 0.391 0.885 0.148 1.76-0.129 2.73-0.195 0.898-0.0612 2.08 0.376 2.53-0.389zm60 0.348c-1.44-0.00563-0.719 0.546 0.752 0.041-0.297-0.0283-0.547-0.0402-0.752-0.041zm-3.85 0.211c-0.575-0.0234-1.11-0.00623-1.44 0.219 0.977-0.126 2.46 0.257 3.12-0.193-0.489 0.0632-1.1-0.00204-1.68-0.0254zm-2.72 0.182c-1.32-6.1e-4 -0.521 0.546 0.699 0.0371-0.279-0.0272-0.511-0.037-0.699-0.0371zm-2.05 0.0156c-0.346-0.0217-0.652-0.00413-0.756 0.217 0.528-0.121 1.55 0.246 1.75-0.195-0.264 0.0602-0.652 1.73e-4 -0.998-0.0215zm6.95 0.178c-0.605-0.0175-1.25 0.00975-1.86 0.0391-1.06 0.0503-2.49 0.0733-3.11 0.584 2.07-0.271 4.3-0.367 6.62-0.389-0.469-0.154-1.04-0.217-1.64-0.234zm-10.1 0.0332c-3.3 0.0494-6.21 0.488-9.66 0.395 2.62 0.431 8.68-0.14 11.1-0.389-0.49-0.0116-0.97-0.0129-1.44-0.00586zm11.6 0.348c-0.181 0.019-0.406 0.202-0.428 0.242-0.184 0.342 0.718-0.0609 0.584-0.195-0.0413-0.0413-0.096-0.0532-0.156-0.0469zm-8.21 0.0469c-2.93 0.0586 0.269 0.461 1.56 0-0.498 0.0076-1.04-0.0102-1.56 0zm7.4 0c-2.36 0.431-5.39 0.188-7.79 0.584h-1.36c-0.422-0.141-1.24 0.2-0.389 0.195h1.36c1.94-0.00381 4.3-0.299 5.84-0.389 0.846-0.0496 2.73 0.329 2.34-0.391zm-24.1 0.188c-0.666 0.0157-1.27 0.0986-1.65 0.396 1.16 0.0212 1.45-0.0766 2.14-0.193 1.01 0.108 4.07-0.127 1.56-0.195-0.653 0.0287-1.39-0.0235-2.05-0.00781zm10.6 0.00781c-4.05 0.468-8.67 0.331-11.9 0.779 1.15 0.35 4.62-0.473 5.45 0.193-15.3 0.466-30.2 1.8-44.8 1.95 1.38-0.375 3.47-0.0314 4.87-0.389 1.87 0.258 2.37-0.379 2.92-0.195-1.76-0.584-6.79 0.855-7.98 0 1.36 0.0138-0.521-0.313 0.195-0.779-0.802 0.251-0.228 2.23 0.193 2.53 5.02-0.625 11.8-0.696 17.9-0.973 15.4-0.699 28.6-1.68 43.6-2.34 1.13-0.0498 2.45 0.362 3.5-0.391-5.1 0.45-11.1 0.536-16.2 0.779-1.53 0.0739-4.26 0.562-5.06 0 3.58-0.115 7.16-0.234 10.5-0.584-2.5-0.279-7.29 0.663-9.54 0 0.957 0.176 2.16-0.151 3.12-0.195 1.11-0.0517 2.45 0.367 3.11-0.389zm1.75 0c-3.3 0.0529 0.287 0.466 1.75 0-0.579 0.0074-1.14-0.0098-1.75 0zm-15.5 0.146c-0.533 
0-1.07 0.0806-1.27 0.242 0.591 0.216 1.48-0.124 2.53 0-0.2-0.162-0.733-0.242-1.27-0.242zm-2.71 0.197c-1.06 0.0169-2.11 0.19-3.23 0.24-2.68 0.122-5.55-0.14-7.79 0.584 3.22-0.417 9-0.268 12.1-0.779-0.353-0.0393-0.705-0.0505-1.06-0.0449zm2.69 0.412c-0.51-0.0231-0.978-0.00747-1.24 0.217 0.848-0.126 2.2 0.255 2.72-0.193-0.424 0.0628-0.974-3.49e-4 -1.48-0.0234zm-1.44 0.0234c-0.347 0.236-1.13 0.0348-1.36 0.389 0.615 0.0312 1.18 0.00767 1.36-0.389zm-2.47 0.115c-0.77 0.00955-1.65 0.239-2.39 0.273-5.1 0.237-11.5 0.493-16 0.584-0.254 0.336-0.964 0.186-1.17 0.195-0.131 0.00596-2.35 0.195-0.584 0.195 6.63 0 13.2-0.596 18.9-0.975 0.526-0.0348 1.63-0.282 1.95-0.193-0.221-0.0612-0.466-0.0833-0.723-0.0801zm-32.8 0.0137c-0.34-0.03-0.935 0.0593-1.15 0.455 0.363-0.00143 0.436-0.161 0.779-0.195 0.799-0.081 0.715-0.23 0.375-0.26zm22.4 0.0645c-2.23-0.0922-3.94 0.347-6.23 0.195 1.07 0.484 5.6-0.0285 6.23-0.195zm43.2 0.172c-0.0514-8.94e-4 -0.115 0.00571-0.195 0.0234-0.705-0.0915-0.717 0.166-1.17 0.195-10.2 0.667-21.3 0.969-32.3 1.56-10.1 0.54-21.8 0.728-30.6 1.56-0.587 0.056-1.59-0.38-1.95 0.389 9.42-0.0427 21.9-0.763 32.5-1.36 11.1-0.626 22.7-0.69 33.5-1.95 0.261-0.00209 0.555-0.408 0.195-0.414zm-22.8 0.0234c-0.624 0.77-1.79 0.34-2.53 0.391-13.1 0.889-28.3 1.37-41.1 2.14 0.483-0.755 1.93-0.332 3.12-0.389 12.7-0.606 26.8-1.48 40.5-2.14zm-31.5 0.162c-2.41 0.0637-5.63 1.01-6.81 0.617 0.981 0.326 1.63 0.0872 3.5 0 1.55-0.0725 3.45-0.0697 4.28-0.584-0.303-0.0335-0.629-0.0423-0.973-0.0332zm2.51 0.00977c-0.542-0.0232-1.05-0.00587-1.34 0.219 0.912-0.126 2.33 0.254 2.92-0.195-0.456 0.063-1.04-2.01e-4 -1.58-0.0234zm-10.7 0.502c-1.2 0.0633-2.93 0.515-3.15 0.301 0.501 0.501 3.85-0.254 4.09-0.195-0.204-0.104-0.544-0.127-0.943-0.105zm8.54 0.105c-1.23 0.326-3.14-0.0239-4.28 0.389 1.39-0.171 3.51 0.394 4.28-0.389zm-15 0.584c-0.0303 0.229 0.033 0.553-0.195 0.584-0.00238-0.298-0.53-0.64-0.389 0 0.625-0.171 0.402 0.505 0.777 0.584-0.112-0.342 0.237-1.14-0.193-1.17zm0.664 1.27c-0.107-0.0147-0.152 0.00946-0.0801 0.0918 0.411 0.173 0.575 0.593 1.17 0.584 0.108-0.236-0.766-0.632-1.09-0.676zm21.5 1.26c0.0441 0.345-0.197 0.975 0.389 0.779-0.0267-0.363-0.128-0.651-0.389-0.779zm43.3 0.0625c-0.0259 0.00159-0.0517 0.0402-0.0703 0.133-0.0527 0.265 0.121 0.759 0.195 0.389 0.04-0.199-0.0473-0.526-0.125-0.521z" fill="#cbc1b6"/> +</svg> diff --git a/scour/__init__.py b/scour/__init__.py index f3f6b3e..591803a 100644 --- a/scour/__init__.py +++ b/scour/__init__.py @@ -1,22 +1,19 @@ ############################################################################### -## -## Copyright (C) 2013 Tavendo GmbH -## -## Licensed under the Apache License, Version 2.0 (the "License"); -## you may not use this file except in compliance with the License. -## You may obtain a copy of the License at -## -## http://www.apache.org/licenses/LICENSE-2.0 -## -## Unless required by applicable law or agreed to in writing, software -## distributed under the License is distributed on an "AS IS" BASIS, -## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -## See the License for the specific language governing permissions and -## limitations under the License. -## +# +# Copyright (C) 2010 Jeff Schiller, 2010 Louis Simard, 2013-2015 Tavendo GmbH +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ############################################################################### -import scour -import svg_regex -import svg_transform -import yocto_css +__version__ = u'0.38.2' diff --git a/scour/scour.py b/scour/scour.py index 900070c..8c907de 100644 --- a/scour/scour.py +++ b/scour/scour.py @@ -5,6 +5,7 @@ # # Copyright 2010 Jeff Schiller # Copyright 2010 Louis Simard +# Copyright 2013-2014 Tavendo GmbH # # This file is part of Scour, http://www.codedread.com/scour/ # @@ -31,8 +32,8 @@ # * Collapse all group based transformations # Even more ideas here: http://esw.w3.org/topic/SvgTidy -# * analysis of path elements to see if rect can be used instead? (must also need to look -# at rounded corners) +# * analysis of path elements to see if rect can be used instead? +# (must also need to look at rounded corners) # Next Up: # - why are marker-start, -end not removed from the style attribute? @@ -43,37 +44,47 @@ # - parse transform attribute # - if a <g> has only one element in it, collapse the <g> (ensure transform, etc are carried down) -# necessary to get true division -from __future__ import division -import os -import sys -import xml.dom.minidom -import re +from __future__ import division # use "true" division instead of integer division in Python 2 (see PEP 238) +from __future__ import print_function # use print() as a function in Python 2 (see PEP 3105) +from __future__ import absolute_import # use absolute imports by default in Python 2 (see PEP 328) + import math -from svg_regex import svg_parser -from svg_transform import svg_transform_parser import optparse -from yocto_css import parseCssString +import os +import re +import sys +import time +import xml.dom.minidom +from xml.dom import Node, NotFoundErr +from collections import namedtuple, defaultdict +from decimal import Context, Decimal, InvalidOperation, getcontext -# Python 2.3- did not have Decimal -try: - from decimal import * -except ImportError: - print >>sys.stderr, "Scour requires Python 2.4." 
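As a quick illustration (this snippet is not part of the diff), the three `__future__` imports added in the hunk above make Python 2 behave like Python 3 in the respects the inline comments mention; on Python 3 these are already the defaults:

```python
# Illustrative only -- not part of scour.py. Shows what the __future__
# imports referenced above change when running under Python 2.
from __future__ import division         # PEP 238: "/" is true division
from __future__ import print_function   # PEP 3105: print() is a function
from __future__ import absolute_import  # PEP 328: imports are absolute by default

print(3 / 2)   # 1.5 even on Python 2 (would be 1 without the import)
print(3 // 2)  # 1 -- integer division is still available via //
```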
+import six +from six.moves import range, urllib -# Import Psyco if available -try: - import psyco - psyco.full() -except ImportError: - pass +from scour.stats import ScourStats +from scour.svg_regex import svg_parser +from scour.svg_transform import svg_transform_parser +from scour.yocto_css import parseCssString +from scour import __version__ -APP = 'scour' -VER = '0.27' -COPYRIGHT = 'Copyright Jeff Schiller, Louis Simard, 2010' -NS = { 'SVG': 'http://www.w3.org/2000/svg', +APP = u'scour' +VER = __version__ +COPYRIGHT = u'Copyright Jeff Schiller, Louis Simard, 2010' + + +XML_ENTS_NO_QUOTES = {'<': '<', '>': '>', '&': '&'} +XML_ENTS_ESCAPE_APOS = XML_ENTS_NO_QUOTES.copy() +XML_ENTS_ESCAPE_APOS["'"] = ''' +XML_ENTS_ESCAPE_QUOT = XML_ENTS_NO_QUOTES.copy() +XML_ENTS_ESCAPE_QUOT['"'] = '"' + +# Used to split values where "x y" or "x,y" or a mix of the two is allowed +RE_COMMA_WSP = re.compile(r"\s*[\s,]\s*") + +NS = {'SVG': 'http://www.w3.org/2000/svg', 'XLINK': 'http://www.w3.org/1999/xlink', 'SODIPODI': 'http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd', 'INKSCAPE': 'http://www.inkscape.org/namespaces/inkscape', @@ -86,3192 +97,4103 @@ NS = { 'SVG': 'http://www.w3.org/2000/svg', 'ADOBE_FLOWS': 'http://ns.adobe.com/Flows/1.0/', 'ADOBE_IMAGE_REPLACEMENT': 'http://ns.adobe.com/ImageReplacement/1.0/', 'ADOBE_CUSTOM': 'http://ns.adobe.com/GenericCustomNamespace/1.0/', - 'ADOBE_XPATH': 'http://ns.adobe.com/XPath/1.0/' + 'ADOBE_XPATH': 'http://ns.adobe.com/XPath/1.0/', + 'SKETCH': 'http://www.bohemiancoding.com/sketch/ns' } -unwanted_ns = [ NS['SODIPODI'], NS['INKSCAPE'], NS['ADOBE_ILLUSTRATOR'], - NS['ADOBE_GRAPHS'], NS['ADOBE_SVG_VIEWER'], NS['ADOBE_VARIABLES'], - NS['ADOBE_SFW'], NS['ADOBE_EXTENSIBILITY'], NS['ADOBE_FLOWS'], - NS['ADOBE_IMAGE_REPLACEMENT'], NS['ADOBE_CUSTOM'], NS['ADOBE_XPATH'] ] +unwanted_ns = [NS['SODIPODI'], NS['INKSCAPE'], NS['ADOBE_ILLUSTRATOR'], + NS['ADOBE_GRAPHS'], NS['ADOBE_SVG_VIEWER'], NS['ADOBE_VARIABLES'], + NS['ADOBE_SFW'], NS['ADOBE_EXTENSIBILITY'], NS['ADOBE_FLOWS'], + NS['ADOBE_IMAGE_REPLACEMENT'], NS['ADOBE_CUSTOM'], + NS['ADOBE_XPATH'], NS['SKETCH']] +# A list of all SVG presentation properties +# +# Sources for this list: +# https://www.w3.org/TR/SVG/propidx.html (implemented) +# https://www.w3.org/TR/SVGTiny12/attributeTable.html (implemented) +# https://www.w3.org/TR/SVG2/propidx.html (not yet implemented) +# svgAttributes = [ - 'clip-rule', - 'display', - 'fill', - 'fill-opacity', - 'fill-rule', - 'filter', - 'font-family', - 'font-size', - 'font-stretch', - 'font-style', - 'font-variant', - 'font-weight', - 'line-height', - 'marker', - 'marker-end', - 'marker-mid', - 'marker-start', - 'opacity', - 'overflow', - 'stop-color', - 'stop-opacity', - 'stroke', - 'stroke-dasharray', - 'stroke-dashoffset', - 'stroke-linecap', - 'stroke-linejoin', - 'stroke-miterlimit', - 'stroke-opacity', - 'stroke-width', - 'visibility' - ] + # SVG 1.1 + 'alignment-baseline', + 'baseline-shift', + 'clip', + 'clip-path', + 'clip-rule', + 'color', + 'color-interpolation', + 'color-interpolation-filters', + 'color-profile', + 'color-rendering', + 'cursor', + 'direction', + 'display', + 'dominant-baseline', + 'enable-background', + 'fill', + 'fill-opacity', + 'fill-rule', + 'filter', + 'flood-color', + 'flood-opacity', + 'font', + 'font-family', + 'font-size', + 'font-size-adjust', + 'font-stretch', + 'font-style', + 'font-variant', + 'font-weight', + 'glyph-orientation-horizontal', + 'glyph-orientation-vertical', + 'image-rendering', + 'kerning', + 'letter-spacing', + 
'lighting-color', + 'marker', + 'marker-end', + 'marker-mid', + 'marker-start', + 'mask', + 'opacity', + 'overflow', + 'pointer-events', + 'shape-rendering', + 'stop-color', + 'stop-opacity', + 'stroke', + 'stroke-dasharray', + 'stroke-dashoffset', + 'stroke-linecap', + 'stroke-linejoin', + 'stroke-miterlimit', + 'stroke-opacity', + 'stroke-width', + 'text-anchor', + 'text-decoration', + 'text-rendering', + 'unicode-bidi', + 'visibility', + 'word-spacing', + 'writing-mode', + # SVG 1.2 Tiny + 'audio-level', + 'buffered-rendering', + 'display-align', + 'line-increment', + 'solid-color', + 'solid-opacity', + 'text-align', + 'vector-effect', + 'viewport-fill', + 'viewport-fill-opacity', +] colors = { - 'aliceblue': 'rgb(240, 248, 255)', - 'antiquewhite': 'rgb(250, 235, 215)', - 'aqua': 'rgb( 0, 255, 255)', - 'aquamarine': 'rgb(127, 255, 212)', - 'azure': 'rgb(240, 255, 255)', - 'beige': 'rgb(245, 245, 220)', - 'bisque': 'rgb(255, 228, 196)', - 'black': 'rgb( 0, 0, 0)', - 'blanchedalmond': 'rgb(255, 235, 205)', - 'blue': 'rgb( 0, 0, 255)', - 'blueviolet': 'rgb(138, 43, 226)', - 'brown': 'rgb(165, 42, 42)', - 'burlywood': 'rgb(222, 184, 135)', - 'cadetblue': 'rgb( 95, 158, 160)', - 'chartreuse': 'rgb(127, 255, 0)', - 'chocolate': 'rgb(210, 105, 30)', - 'coral': 'rgb(255, 127, 80)', - 'cornflowerblue': 'rgb(100, 149, 237)', - 'cornsilk': 'rgb(255, 248, 220)', - 'crimson': 'rgb(220, 20, 60)', - 'cyan': 'rgb( 0, 255, 255)', - 'darkblue': 'rgb( 0, 0, 139)', - 'darkcyan': 'rgb( 0, 139, 139)', - 'darkgoldenrod': 'rgb(184, 134, 11)', - 'darkgray': 'rgb(169, 169, 169)', - 'darkgreen': 'rgb( 0, 100, 0)', - 'darkgrey': 'rgb(169, 169, 169)', - 'darkkhaki': 'rgb(189, 183, 107)', - 'darkmagenta': 'rgb(139, 0, 139)', - 'darkolivegreen': 'rgb( 85, 107, 47)', - 'darkorange': 'rgb(255, 140, 0)', - 'darkorchid': 'rgb(153, 50, 204)', - 'darkred': 'rgb(139, 0, 0)', - 'darksalmon': 'rgb(233, 150, 122)', - 'darkseagreen': 'rgb(143, 188, 143)', - 'darkslateblue': 'rgb( 72, 61, 139)', - 'darkslategray': 'rgb( 47, 79, 79)', - 'darkslategrey': 'rgb( 47, 79, 79)', - 'darkturquoise': 'rgb( 0, 206, 209)', - 'darkviolet': 'rgb(148, 0, 211)', - 'deeppink': 'rgb(255, 20, 147)', - 'deepskyblue': 'rgb( 0, 191, 255)', - 'dimgray': 'rgb(105, 105, 105)', - 'dimgrey': 'rgb(105, 105, 105)', - 'dodgerblue': 'rgb( 30, 144, 255)', - 'firebrick': 'rgb(178, 34, 34)', - 'floralwhite': 'rgb(255, 250, 240)', - 'forestgreen': 'rgb( 34, 139, 34)', - 'fuchsia': 'rgb(255, 0, 255)', - 'gainsboro': 'rgb(220, 220, 220)', - 'ghostwhite': 'rgb(248, 248, 255)', - 'gold': 'rgb(255, 215, 0)', - 'goldenrod': 'rgb(218, 165, 32)', - 'gray': 'rgb(128, 128, 128)', - 'grey': 'rgb(128, 128, 128)', - 'green': 'rgb( 0, 128, 0)', - 'greenyellow': 'rgb(173, 255, 47)', - 'honeydew': 'rgb(240, 255, 240)', - 'hotpink': 'rgb(255, 105, 180)', - 'indianred': 'rgb(205, 92, 92)', - 'indigo': 'rgb( 75, 0, 130)', - 'ivory': 'rgb(255, 255, 240)', - 'khaki': 'rgb(240, 230, 140)', - 'lavender': 'rgb(230, 230, 250)', - 'lavenderblush': 'rgb(255, 240, 245)', - 'lawngreen': 'rgb(124, 252, 0)', - 'lemonchiffon': 'rgb(255, 250, 205)', - 'lightblue': 'rgb(173, 216, 230)', - 'lightcoral': 'rgb(240, 128, 128)', - 'lightcyan': 'rgb(224, 255, 255)', - 'lightgoldenrodyellow': 'rgb(250, 250, 210)', - 'lightgray': 'rgb(211, 211, 211)', - 'lightgreen': 'rgb(144, 238, 144)', - 'lightgrey': 'rgb(211, 211, 211)', - 'lightpink': 'rgb(255, 182, 193)', - 'lightsalmon': 'rgb(255, 160, 122)', - 'lightseagreen': 'rgb( 32, 178, 170)', - 'lightskyblue': 'rgb(135, 206, 250)', - 'lightslategray': 
'rgb(119, 136, 153)', - 'lightslategrey': 'rgb(119, 136, 153)', - 'lightsteelblue': 'rgb(176, 196, 222)', - 'lightyellow': 'rgb(255, 255, 224)', - 'lime': 'rgb( 0, 255, 0)', - 'limegreen': 'rgb( 50, 205, 50)', - 'linen': 'rgb(250, 240, 230)', - 'magenta': 'rgb(255, 0, 255)', - 'maroon': 'rgb(128, 0, 0)', - 'mediumaquamarine': 'rgb(102, 205, 170)', - 'mediumblue': 'rgb( 0, 0, 205)', - 'mediumorchid': 'rgb(186, 85, 211)', - 'mediumpurple': 'rgb(147, 112, 219)', - 'mediumseagreen': 'rgb( 60, 179, 113)', - 'mediumslateblue': 'rgb(123, 104, 238)', - 'mediumspringgreen': 'rgb( 0, 250, 154)', - 'mediumturquoise': 'rgb( 72, 209, 204)', - 'mediumvioletred': 'rgb(199, 21, 133)', - 'midnightblue': 'rgb( 25, 25, 112)', - 'mintcream': 'rgb(245, 255, 250)', - 'mistyrose': 'rgb(255, 228, 225)', - 'moccasin': 'rgb(255, 228, 181)', - 'navajowhite': 'rgb(255, 222, 173)', - 'navy': 'rgb( 0, 0, 128)', - 'oldlace': 'rgb(253, 245, 230)', - 'olive': 'rgb(128, 128, 0)', - 'olivedrab': 'rgb(107, 142, 35)', - 'orange': 'rgb(255, 165, 0)', - 'orangered': 'rgb(255, 69, 0)', - 'orchid': 'rgb(218, 112, 214)', - 'palegoldenrod': 'rgb(238, 232, 170)', - 'palegreen': 'rgb(152, 251, 152)', - 'paleturquoise': 'rgb(175, 238, 238)', - 'palevioletred': 'rgb(219, 112, 147)', - 'papayawhip': 'rgb(255, 239, 213)', - 'peachpuff': 'rgb(255, 218, 185)', - 'peru': 'rgb(205, 133, 63)', - 'pink': 'rgb(255, 192, 203)', - 'plum': 'rgb(221, 160, 221)', - 'powderblue': 'rgb(176, 224, 230)', - 'purple': 'rgb(128, 0, 128)', - 'red': 'rgb(255, 0, 0)', - 'rosybrown': 'rgb(188, 143, 143)', - 'royalblue': 'rgb( 65, 105, 225)', - 'saddlebrown': 'rgb(139, 69, 19)', - 'salmon': 'rgb(250, 128, 114)', - 'sandybrown': 'rgb(244, 164, 96)', - 'seagreen': 'rgb( 46, 139, 87)', - 'seashell': 'rgb(255, 245, 238)', - 'sienna': 'rgb(160, 82, 45)', - 'silver': 'rgb(192, 192, 192)', - 'skyblue': 'rgb(135, 206, 235)', - 'slateblue': 'rgb(106, 90, 205)', - 'slategray': 'rgb(112, 128, 144)', - 'slategrey': 'rgb(112, 128, 144)', - 'snow': 'rgb(255, 250, 250)', - 'springgreen': 'rgb( 0, 255, 127)', - 'steelblue': 'rgb( 70, 130, 180)', - 'tan': 'rgb(210, 180, 140)', - 'teal': 'rgb( 0, 128, 128)', - 'thistle': 'rgb(216, 191, 216)', - 'tomato': 'rgb(255, 99, 71)', - 'turquoise': 'rgb( 64, 224, 208)', - 'violet': 'rgb(238, 130, 238)', - 'wheat': 'rgb(245, 222, 179)', - 'white': 'rgb(255, 255, 255)', - 'whitesmoke': 'rgb(245, 245, 245)', - 'yellow': 'rgb(255, 255, 0)', - 'yellowgreen': 'rgb(154, 205, 50)', - } + 'aliceblue': 'rgb(240, 248, 255)', + 'antiquewhite': 'rgb(250, 235, 215)', + 'aqua': 'rgb( 0, 255, 255)', + 'aquamarine': 'rgb(127, 255, 212)', + 'azure': 'rgb(240, 255, 255)', + 'beige': 'rgb(245, 245, 220)', + 'bisque': 'rgb(255, 228, 196)', + 'black': 'rgb( 0, 0, 0)', + 'blanchedalmond': 'rgb(255, 235, 205)', + 'blue': 'rgb( 0, 0, 255)', + 'blueviolet': 'rgb(138, 43, 226)', + 'brown': 'rgb(165, 42, 42)', + 'burlywood': 'rgb(222, 184, 135)', + 'cadetblue': 'rgb( 95, 158, 160)', + 'chartreuse': 'rgb(127, 255, 0)', + 'chocolate': 'rgb(210, 105, 30)', + 'coral': 'rgb(255, 127, 80)', + 'cornflowerblue': 'rgb(100, 149, 237)', + 'cornsilk': 'rgb(255, 248, 220)', + 'crimson': 'rgb(220, 20, 60)', + 'cyan': 'rgb( 0, 255, 255)', + 'darkblue': 'rgb( 0, 0, 139)', + 'darkcyan': 'rgb( 0, 139, 139)', + 'darkgoldenrod': 'rgb(184, 134, 11)', + 'darkgray': 'rgb(169, 169, 169)', + 'darkgreen': 'rgb( 0, 100, 0)', + 'darkgrey': 'rgb(169, 169, 169)', + 'darkkhaki': 'rgb(189, 183, 107)', + 'darkmagenta': 'rgb(139, 0, 139)', + 'darkolivegreen': 'rgb( 85, 107, 47)', + 'darkorange': 
'rgb(255, 140, 0)', + 'darkorchid': 'rgb(153, 50, 204)', + 'darkred': 'rgb(139, 0, 0)', + 'darksalmon': 'rgb(233, 150, 122)', + 'darkseagreen': 'rgb(143, 188, 143)', + 'darkslateblue': 'rgb( 72, 61, 139)', + 'darkslategray': 'rgb( 47, 79, 79)', + 'darkslategrey': 'rgb( 47, 79, 79)', + 'darkturquoise': 'rgb( 0, 206, 209)', + 'darkviolet': 'rgb(148, 0, 211)', + 'deeppink': 'rgb(255, 20, 147)', + 'deepskyblue': 'rgb( 0, 191, 255)', + 'dimgray': 'rgb(105, 105, 105)', + 'dimgrey': 'rgb(105, 105, 105)', + 'dodgerblue': 'rgb( 30, 144, 255)', + 'firebrick': 'rgb(178, 34, 34)', + 'floralwhite': 'rgb(255, 250, 240)', + 'forestgreen': 'rgb( 34, 139, 34)', + 'fuchsia': 'rgb(255, 0, 255)', + 'gainsboro': 'rgb(220, 220, 220)', + 'ghostwhite': 'rgb(248, 248, 255)', + 'gold': 'rgb(255, 215, 0)', + 'goldenrod': 'rgb(218, 165, 32)', + 'gray': 'rgb(128, 128, 128)', + 'grey': 'rgb(128, 128, 128)', + 'green': 'rgb( 0, 128, 0)', + 'greenyellow': 'rgb(173, 255, 47)', + 'honeydew': 'rgb(240, 255, 240)', + 'hotpink': 'rgb(255, 105, 180)', + 'indianred': 'rgb(205, 92, 92)', + 'indigo': 'rgb( 75, 0, 130)', + 'ivory': 'rgb(255, 255, 240)', + 'khaki': 'rgb(240, 230, 140)', + 'lavender': 'rgb(230, 230, 250)', + 'lavenderblush': 'rgb(255, 240, 245)', + 'lawngreen': 'rgb(124, 252, 0)', + 'lemonchiffon': 'rgb(255, 250, 205)', + 'lightblue': 'rgb(173, 216, 230)', + 'lightcoral': 'rgb(240, 128, 128)', + 'lightcyan': 'rgb(224, 255, 255)', + 'lightgoldenrodyellow': 'rgb(250, 250, 210)', + 'lightgray': 'rgb(211, 211, 211)', + 'lightgreen': 'rgb(144, 238, 144)', + 'lightgrey': 'rgb(211, 211, 211)', + 'lightpink': 'rgb(255, 182, 193)', + 'lightsalmon': 'rgb(255, 160, 122)', + 'lightseagreen': 'rgb( 32, 178, 170)', + 'lightskyblue': 'rgb(135, 206, 250)', + 'lightslategray': 'rgb(119, 136, 153)', + 'lightslategrey': 'rgb(119, 136, 153)', + 'lightsteelblue': 'rgb(176, 196, 222)', + 'lightyellow': 'rgb(255, 255, 224)', + 'lime': 'rgb( 0, 255, 0)', + 'limegreen': 'rgb( 50, 205, 50)', + 'linen': 'rgb(250, 240, 230)', + 'magenta': 'rgb(255, 0, 255)', + 'maroon': 'rgb(128, 0, 0)', + 'mediumaquamarine': 'rgb(102, 205, 170)', + 'mediumblue': 'rgb( 0, 0, 205)', + 'mediumorchid': 'rgb(186, 85, 211)', + 'mediumpurple': 'rgb(147, 112, 219)', + 'mediumseagreen': 'rgb( 60, 179, 113)', + 'mediumslateblue': 'rgb(123, 104, 238)', + 'mediumspringgreen': 'rgb( 0, 250, 154)', + 'mediumturquoise': 'rgb( 72, 209, 204)', + 'mediumvioletred': 'rgb(199, 21, 133)', + 'midnightblue': 'rgb( 25, 25, 112)', + 'mintcream': 'rgb(245, 255, 250)', + 'mistyrose': 'rgb(255, 228, 225)', + 'moccasin': 'rgb(255, 228, 181)', + 'navajowhite': 'rgb(255, 222, 173)', + 'navy': 'rgb( 0, 0, 128)', + 'oldlace': 'rgb(253, 245, 230)', + 'olive': 'rgb(128, 128, 0)', + 'olivedrab': 'rgb(107, 142, 35)', + 'orange': 'rgb(255, 165, 0)', + 'orangered': 'rgb(255, 69, 0)', + 'orchid': 'rgb(218, 112, 214)', + 'palegoldenrod': 'rgb(238, 232, 170)', + 'palegreen': 'rgb(152, 251, 152)', + 'paleturquoise': 'rgb(175, 238, 238)', + 'palevioletred': 'rgb(219, 112, 147)', + 'papayawhip': 'rgb(255, 239, 213)', + 'peachpuff': 'rgb(255, 218, 185)', + 'peru': 'rgb(205, 133, 63)', + 'pink': 'rgb(255, 192, 203)', + 'plum': 'rgb(221, 160, 221)', + 'powderblue': 'rgb(176, 224, 230)', + 'purple': 'rgb(128, 0, 128)', + 'red': 'rgb(255, 0, 0)', + 'rosybrown': 'rgb(188, 143, 143)', + 'royalblue': 'rgb( 65, 105, 225)', + 'saddlebrown': 'rgb(139, 69, 19)', + 'salmon': 'rgb(250, 128, 114)', + 'sandybrown': 'rgb(244, 164, 96)', + 'seagreen': 'rgb( 46, 139, 87)', + 'seashell': 'rgb(255, 245, 238)', + 'sienna': 
'rgb(160, 82, 45)', + 'silver': 'rgb(192, 192, 192)', + 'skyblue': 'rgb(135, 206, 235)', + 'slateblue': 'rgb(106, 90, 205)', + 'slategray': 'rgb(112, 128, 144)', + 'slategrey': 'rgb(112, 128, 144)', + 'snow': 'rgb(255, 250, 250)', + 'springgreen': 'rgb( 0, 255, 127)', + 'steelblue': 'rgb( 70, 130, 180)', + 'tan': 'rgb(210, 180, 140)', + 'teal': 'rgb( 0, 128, 128)', + 'thistle': 'rgb(216, 191, 216)', + 'tomato': 'rgb(255, 99, 71)', + 'turquoise': 'rgb( 64, 224, 208)', + 'violet': 'rgb(238, 130, 238)', + 'wheat': 'rgb(245, 222, 179)', + 'white': 'rgb(255, 255, 255)', + 'whitesmoke': 'rgb(245, 245, 245)', + 'yellow': 'rgb(255, 255, 0)', + 'yellowgreen': 'rgb(154, 205, 50)', +} -default_attributes = { # excluded all attributes with 'auto' as default - # SVG 1.1 presentation attributes - 'baseline-shift': 'baseline', - 'clip-path': 'none', - 'clip-rule': 'nonzero', - 'color': '#000', - 'color-interpolation-filters': 'linearRGB', - 'color-interpolation': 'sRGB', - 'direction': 'ltr', - 'display': 'inline', - 'enable-background': 'accumulate', - 'fill': '#000', - 'fill-opacity': '1', - 'fill-rule': 'nonzero', - 'filter': 'none', - 'flood-color': '#000', - 'flood-opacity': '1', - 'font-size-adjust': 'none', - 'font-size': 'medium', - 'font-stretch': 'normal', - 'font-style': 'normal', - 'font-variant': 'normal', - 'font-weight': 'normal', - 'glyph-orientation-horizontal': '0deg', - 'letter-spacing': 'normal', - 'lighting-color': '#fff', - 'marker': 'none', - 'marker-start': 'none', - 'marker-mid': 'none', - 'marker-end': 'none', - 'mask': 'none', - 'opacity': '1', - 'pointer-events': 'visiblePainted', - 'stop-color': '#000', - 'stop-opacity': '1', - 'stroke': 'none', - 'stroke-dasharray': 'none', - 'stroke-dashoffset': '0', - 'stroke-linecap': 'butt', - 'stroke-linejoin': 'miter', - 'stroke-miterlimit': '4', - 'stroke-opacity': '1', - 'stroke-width': '1', - 'text-anchor': 'start', - 'text-decoration': 'none', - 'unicode-bidi': 'normal', - 'visibility': 'visible', - 'word-spacing': 'normal', - 'writing-mode': 'lr-tb', - # SVG 1.2 tiny properties - 'audio-level': '1', - 'solid-color': '#000', - 'solid-opacity': '1', - 'text-align': 'start', - 'vector-effect': 'none', - 'viewport-fill': 'none', - 'viewport-fill-opacity': '1', - } +# A list of default poperties that are safe to remove +# +# Sources for this list: +# https://www.w3.org/TR/SVG/propidx.html (implemented) +# https://www.w3.org/TR/SVGTiny12/attributeTable.html (implemented) +# https://www.w3.org/TR/SVG2/propidx.html (not yet implemented) +# +default_properties = { # excluded all properties with 'auto' as default + # SVG 1.1 presentation attributes + 'baseline-shift': 'baseline', + 'clip-path': 'none', + 'clip-rule': 'nonzero', + 'color': '#000', + 'color-interpolation-filters': 'linearRGB', + 'color-interpolation': 'sRGB', + 'direction': 'ltr', + 'display': 'inline', + 'enable-background': 'accumulate', + 'fill': '#000', + 'fill-opacity': '1', + 'fill-rule': 'nonzero', + 'filter': 'none', + 'flood-color': '#000', + 'flood-opacity': '1', + 'font-size-adjust': 'none', + 'font-size': 'medium', + 'font-stretch': 'normal', + 'font-style': 'normal', + 'font-variant': 'normal', + 'font-weight': 'normal', + 'glyph-orientation-horizontal': '0deg', + 'letter-spacing': 'normal', + 'lighting-color': '#fff', + 'marker': 'none', + 'marker-start': 'none', + 'marker-mid': 'none', + 'marker-end': 'none', + 'mask': 'none', + 'opacity': '1', + 'pointer-events': 'visiblePainted', + 'stop-color': '#000', + 'stop-opacity': '1', + 'stroke': 'none', + 
'stroke-dasharray': 'none', + 'stroke-dashoffset': '0', + 'stroke-linecap': 'butt', + 'stroke-linejoin': 'miter', + 'stroke-miterlimit': '4', + 'stroke-opacity': '1', + 'stroke-width': '1', + 'text-anchor': 'start', + 'text-decoration': 'none', + 'unicode-bidi': 'normal', + 'visibility': 'visible', + 'word-spacing': 'normal', + 'writing-mode': 'lr-tb', + # SVG 1.2 tiny properties + 'audio-level': '1', + 'solid-color': '#000', + 'solid-opacity': '1', + 'text-align': 'start', + 'vector-effect': 'none', + 'viewport-fill': 'none', + 'viewport-fill-opacity': '1', +} + + +def is_same_sign(a, b): + return (a <= 0 and b <= 0) or (a >= 0 and b >= 0) + + +def is_same_direction(x1, y1, x2, y2): + if is_same_sign(x1, x2) and is_same_sign(y1, y2): + diff = y1/x1 - y2/x2 + return scouringContext.plus(1 + diff) == 1 + else: + return False -def isSameSign(a,b): return (a <= 0 and b <= 0) or (a >= 0 and b >= 0) scinumber = re.compile(r"[-+]?(\d*\.?)?\d+[eE][-+]?\d+") number = re.compile(r"[-+]?(\d*\.?)?\d+") sciExponent = re.compile(r"[eE]([-+]?\d+)") unit = re.compile("(em|ex|px|pt|pc|cm|mm|in|%){1,1}$") + class Unit(object): - # Integer constants for units. - INVALID = -1 - NONE = 0 - PCT = 1 - PX = 2 - PT = 3 - PC = 4 - EM = 5 - EX = 6 - CM = 7 - MM = 8 - IN = 9 + # Integer constants for units. + INVALID = -1 + NONE = 0 + PCT = 1 + PX = 2 + PT = 3 + PC = 4 + EM = 5 + EX = 6 + CM = 7 + MM = 8 + IN = 9 - # String to Unit. Basically, converts unit strings to their integer constants. - s2u = { - '': NONE, - '%': PCT, - 'px': PX, - 'pt': PT, - 'pc': PC, - 'em': EM, - 'ex': EX, - 'cm': CM, - 'mm': MM, - 'in': IN, - } + # String to Unit. Basically, converts unit strings to their integer constants. + s2u = { + '': NONE, + '%': PCT, + 'px': PX, + 'pt': PT, + 'pc': PC, + 'em': EM, + 'ex': EX, + 'cm': CM, + 'mm': MM, + 'in': IN, + } - # Unit to String. Basically, converts unit integer constants to their corresponding strings. - u2s = { - NONE: '', - PCT: '%', - PX: 'px', - PT: 'pt', - PC: 'pc', - EM: 'em', - EX: 'ex', - CM: 'cm', - MM: 'mm', - IN: 'in', - } + # Unit to String. Basically, converts unit integer constants to their corresponding strings. 
+ u2s = { + NONE: '', + PCT: '%', + PX: 'px', + PT: 'pt', + PC: 'pc', + EM: 'em', + EX: 'ex', + CM: 'cm', + MM: 'mm', + IN: 'in', + } # @staticmethod - def get(unitstr): - if unitstr is None: return Unit.NONE - try: - return Unit.s2u[unitstr] - except KeyError: - return Unit.INVALID + def get(unitstr): + if unitstr is None: + return Unit.NONE + try: + return Unit.s2u[unitstr] + except KeyError: + return Unit.INVALID # @staticmethod - def str(unitint): - try: - return Unit.u2s[unitint] - except KeyError: - return 'INVALID' + def str(unitint): + try: + return Unit.u2s[unitint] + except KeyError: + return 'INVALID' + + get = staticmethod(get) + str = staticmethod(str) - get = staticmethod(get) - str = staticmethod(str) class SVGLength(object): - def __init__(self, str): - try: # simple unitless and no scientific notation - self.value = float(str) - if int(self.value) == self.value: - self.value = int(self.value) - self.units = Unit.NONE - except ValueError: - # we know that the length string has an exponent, a unit, both or is invalid - # parse out number, exponent and unit - self.value = 0 - unitBegin = 0 - scinum = scinumber.match(str) - if scinum != None: - # this will always match, no need to check it - numMatch = number.match(str) - expMatch = sciExponent.search(str, numMatch.start(0)) - self.value = (float(numMatch.group(0)) * - 10 ** float(expMatch.group(1))) - unitBegin = expMatch.end(1) - else: - # unit or invalid - numMatch = number.match(str) - if numMatch != None: - self.value = float(numMatch.group(0)) - unitBegin = numMatch.end(0) + def __init__(self, str): + try: # simple unitless and no scientific notation + self.value = float(str) + if int(self.value) == self.value: + self.value = int(self.value) + self.units = Unit.NONE + except ValueError: + # we know that the length string has an exponent, a unit, both or is invalid - if int(self.value) == self.value: - self.value = int(self.value) - - if unitBegin != 0 : - unitMatch = unit.search(str, unitBegin) - if unitMatch != None : - self.units = Unit.get(unitMatch.group(0)) - - # invalid - else: - # TODO: this needs to set the default for the given attribute (how?) + # parse out number, exponent and unit self.value = 0 - self.units = Unit.INVALID + unitBegin = 0 + scinum = scinumber.match(str) + if scinum is not None: + # this will always match, no need to check it + numMatch = number.match(str) + expMatch = sciExponent.search(str, numMatch.start(0)) + self.value = (float(numMatch.group(0)) * + 10 ** float(expMatch.group(1))) + unitBegin = expMatch.end(1) + else: + # unit or invalid + numMatch = number.match(str) + if numMatch is not None: + self.value = float(numMatch.group(0)) + unitBegin = numMatch.end(0) + + if int(self.value) == self.value: + self.value = int(self.value) + + if unitBegin != 0: + unitMatch = unit.search(str, unitBegin) + if unitMatch is not None: + self.units = Unit.get(unitMatch.group(0)) + + # invalid + else: + # TODO: this needs to set the default for the given attribute (how?) 
+ self.value = 0 + self.units = Unit.INVALID + def findElementsWithId(node, elems=None): - """ - Returns all elements with id attributes - """ - if elems is None: - elems = {} - id = node.getAttribute('id') - if id != '' : - elems[id] = node - if node.hasChildNodes() : - for child in node.childNodes: - # from http://www.w3.org/TR/DOM-Level-2-Core/idl-definitions.html - # we are only really interested in nodes of type Element (1) - if child.nodeType == 1 : - findElementsWithId(child, elems) - return elems + """ + Returns all elements with id attributes + """ + if elems is None: + elems = {} + id = node.getAttribute('id') + if id != '': + elems[id] = node + if node.hasChildNodes(): + for child in node.childNodes: + # from http://www.w3.org/TR/DOM-Level-2-Core/idl-definitions.html + # we are only really interested in nodes of type Element (1) + if child.nodeType == Node.ELEMENT_NODE: + findElementsWithId(child, elems) + return elems + + +referencingProps = ['fill', 'stroke', 'filter', 'clip-path', 'mask', 'marker-start', 'marker-end', 'marker-mid'] -referencingProps = ['fill', 'stroke', 'filter', 'clip-path', 'mask', 'marker-start', - 'marker-end', 'marker-mid'] def findReferencedElements(node, ids=None): - """ - Returns the number of times an ID is referenced as well as all elements - that reference it. node is the node at which to start the search. The - return value is a map which has the id as key and each value is an array - where the first value is a count and the second value is a list of nodes - that referenced it. + """ + Returns IDs of all referenced elements + - node is the node at which to start the search. + - returns a map which has the id as key and + each value is is a set of nodes - Currently looks at fill, stroke, clip-path, mask, marker, and - xlink:href attributes. - """ - global referencingProps - if ids is None: - ids = {} - # TODO: input argument ids is clunky here (see below how it is called) - # GZ: alternative to passing dict, use **kwargs + Currently looks at 'xlink:href' and all attributes in 'referencingProps' + """ + global referencingProps + if ids is None: + ids = {} + # TODO: input argument ids is clunky here (see below how it is called) + # GZ: alternative to passing dict, use **kwargs - # if this node is a style element, parse its text into CSS - if node.nodeName == 'style' and node.namespaceURI == NS['SVG']: - # one stretch of text, please! (we could use node.normalize(), but - # this actually modifies the node, and we don't want to keep - # whitespace around if there's any) - stylesheet = "".join([child.nodeValue for child in node.childNodes]) - if stylesheet != '': - cssRules = parseCssString(stylesheet) - for rule in cssRules: - for propname in rule['properties']: - propval = rule['properties'][propname] - findReferencingProperty(node, propname, propval, ids) - return ids + # if this node is a style element, parse its text into CSS + if node.nodeName == 'style' and node.namespaceURI == NS['SVG']: + # one stretch of text, please! 
(we could use node.normalize(), but + # this actually modifies the node, and we don't want to keep + # whitespace around if there's any) + stylesheet = "".join(child.nodeValue for child in node.childNodes) + if stylesheet != '': + cssRules = parseCssString(stylesheet) + for rule in cssRules: + for propname in rule['properties']: + propval = rule['properties'][propname] + findReferencingProperty(node, propname, propval, ids) + return ids - # else if xlink:href is set, then grab the id - href = node.getAttributeNS(NS['XLINK'],'href') - if href != '' and len(href) > 1 and href[0] == '#': - # we remove the hash mark from the beginning of the id - id = href[1:] - if id in ids: - ids[id][0] += 1 - ids[id][1].append(node) - else: - ids[id] = [1,[node]] + # else if xlink:href is set, then grab the id + href = node.getAttributeNS(NS['XLINK'], 'href') + if href != '' and len(href) > 1 and href[0] == '#': + # we remove the hash mark from the beginning of the id + id = href[1:] + if id in ids: + ids[id].add(node) + else: + ids[id] = {node} - # now get all style properties and the fill, stroke, filter attributes - styles = node.getAttribute('style').split(';') - for attr in referencingProps: - styles.append(':'.join([attr, node.getAttribute(attr)])) + # now get all style properties and the fill, stroke, filter attributes + styles = node.getAttribute('style').split(';') - for style in styles: - propval = style.split(':') - if len(propval) == 2 : - prop = propval[0].strip() - val = propval[1].strip() - findReferencingProperty(node, prop, val, ids) + for style in styles: + propval = style.split(':') + if len(propval) == 2: + prop = propval[0].strip() + val = propval[1].strip() + findReferencingProperty(node, prop, val, ids) + + for attr in referencingProps: + val = node.getAttribute(attr).strip() + if not val: + continue + findReferencingProperty(node, attr, val, ids) + + if node.hasChildNodes(): + for child in node.childNodes: + if child.nodeType == Node.ELEMENT_NODE: + findReferencedElements(child, ids) + return ids - if node.hasChildNodes() : - for child in node.childNodes: - if child.nodeType == 1 : - findReferencedElements(child, ids) - return ids def findReferencingProperty(node, prop, val, ids): - global referencingProps - if prop in referencingProps and val != '' : - if len(val) >= 7 and val[0:5] == 'url(#' : - id = val[5:val.find(')')] - if ids.has_key(id) : - ids[id][0] += 1 - ids[id][1].append(node) - else: - ids[id] = [1,[node]] - # if the url has a quote in it, we need to compensate - elif len(val) >= 8 : - id = None - # double-quote - if val[0:6] == 'url("#' : - id = val[6:val.find('")')] - # single-quote - elif val[0:6] == "url('#" : - id = val[6:val.find("')")] - if id != None: - if ids.has_key(id) : - ids[id][0] += 1 - ids[id][1].append(node) + global referencingProps + if prop in referencingProps and val != '': + if len(val) >= 7 and val[0:5] == 'url(#': + id = val[5:val.find(')')] + if id in ids: + ids[id].add(node) else: - ids[id] = [1,[node]] + ids[id] = {node} + # if the url has a quote in it, we need to compensate + elif len(val) >= 8: + id = None + # double-quote + if val[0:6] == 'url("#': + id = val[6:val.find('")')] + # single-quote + elif val[0:6] == "url('#": + id = val[6:val.find("')")] + if id is not None: + if id in ids: + ids[id].add(node) + else: + ids[id] = {node} -numIDsRemoved = 0 -numElemsRemoved = 0 -numAttrsRemoved = 0 -numRastersEmbedded = 0 -numPathSegmentsReduced = 0 -numCurvesStraightened = 0 -numBytesSavedInPathData = 0 -numBytesSavedInColors = 0 
-numBytesSavedInIDs = 0 -numBytesSavedInLengths = 0 -numBytesSavedInTransforms = 0 -numPointsRemovedFromPolygon = 0 -numCommentBytes = 0 -def removeUnusedDefs(doc, defElem, elemsToRemove=None): - if elemsToRemove is None: - elemsToRemove = [] +def removeUnusedDefs(doc, defElem, elemsToRemove=None, referencedIDs=None): + if elemsToRemove is None: + elemsToRemove = [] - identifiedElements = findElementsWithId(doc.documentElement) - referencedIDs = findReferencedElements(doc.documentElement) + # removeUnusedDefs do not change the XML itself; therefore there is no point in + # recomputing findReferencedElements when we recurse into child nodes. + if referencedIDs is None: + referencedIDs = findReferencedElements(doc.documentElement) - keepTags = ['font', 'style', 'metadata', 'script', 'title', 'desc'] - for elem in defElem.childNodes: - # only look at it if an element and not referenced anywhere else - if elem.nodeType == 1 and (elem.getAttribute('id') == '' or \ - (not elem.getAttribute('id') in referencedIDs)): + keepTags = ['font', 'style', 'metadata', 'script', 'title', 'desc'] + for elem in defElem.childNodes: + # only look at it if an element and not referenced anywhere else + if elem.nodeType != Node.ELEMENT_NODE: + continue - # we only inspect the children of a group in a defs if the group - # is not referenced anywhere else - if elem.nodeName == 'g' and elem.namespaceURI == NS['SVG']: - elemsToRemove = removeUnusedDefs(doc, elem, elemsToRemove) - # we only remove if it is not one of our tags we always keep (see above) - elif not elem.nodeName in keepTags: - elemsToRemove.append(elem) - return elemsToRemove + elem_id = elem.getAttribute('id') -def removeUnreferencedElements(doc): - """ - Removes all unreferenced elements except for <svg>, <font>, <metadata>, <title>, and <desc>. - Also vacuums the defs of any non-referenced renderable elements. + if elem_id == '' or elem_id not in referencedIDs: + # we only inspect the children of a group in a defs if the group + # is not referenced anywhere else + if elem.nodeName == 'g' and elem.namespaceURI == NS['SVG']: + elemsToRemove = removeUnusedDefs(doc, elem, elemsToRemove, referencedIDs=referencedIDs) + # we only remove if it is not one of our tags we always keep (see above) + elif elem.nodeName not in keepTags: + elemsToRemove.append(elem) + return elemsToRemove - Returns the number of unreferenced elements removed from the document. - """ - global numElemsRemoved - num = 0 - # Remove certain unreferenced elements outside of defs - removeTags = ['linearGradient', 'radialGradient', 'pattern'] - identifiedElements = findElementsWithId(doc.documentElement) - referencedIDs = findReferencedElements(doc.documentElement) +def remove_unreferenced_elements(doc, keepDefs, stats): + """ + Removes all unreferenced elements except for <svg>, <font>, <metadata>, <title>, and <desc>. + Also vacuums the defs of any non-referenced renderable elements. - for id in identifiedElements: - if not id in referencedIDs: - goner = identifiedElements[id] - if goner != None and goner.parentNode != None and goner.nodeName in removeTags: - goner.parentNode.removeChild(goner) - num += 1 - numElemsRemoved += 1 + Returns the number of unreferenced elements removed from the document. 
+ """ + num = 0 - # Remove most unreferenced elements inside defs - defs = doc.documentElement.getElementsByTagName('defs') - for aDef in defs: - elemsToRemove = removeUnusedDefs(doc, aDef) - for elem in elemsToRemove: - elem.parentNode.removeChild(elem) - numElemsRemoved += 1 - num += 1 - return num + # Remove certain unreferenced elements outside of defs + removeTags = ['linearGradient', 'radialGradient', 'pattern'] + identifiedElements = findElementsWithId(doc.documentElement) + referencedIDs = findReferencedElements(doc.documentElement) -def shortenIDs(doc, prefix, unprotectedElements=None): - """ - Shortens ID names used in the document. ID names referenced the most often are assigned the - shortest ID names. - If the list unprotectedElements is provided, only IDs from this list will be shortened. + if not keepDefs: + # Remove most unreferenced elements inside defs + defs = doc.documentElement.getElementsByTagName('defs') + for aDef in defs: + elemsToRemove = removeUnusedDefs(doc, aDef, referencedIDs=referencedIDs) + for elem in elemsToRemove: + elem.parentNode.removeChild(elem) + stats.num_elements_removed += len(elemsToRemove) + num += len(elemsToRemove) - Returns the number of bytes saved by shortening ID names in the document. - """ - num = 0 + for id in identifiedElements: + if id not in referencedIDs: + goner = identifiedElements[id] + if (goner is not None and goner.nodeName in removeTags + and goner.parentNode is not None + and goner.parentNode.tagName != 'defs'): + goner.parentNode.removeChild(goner) + num += 1 + stats.num_elements_removed += 1 - identifiedElements = findElementsWithId(doc.documentElement) - if unprotectedElements is None: - unprotectedElements = identifiedElements - referencedIDs = findReferencedElements(doc.documentElement) + return num - # Make idList (list of idnames) sorted by reference count - # descending, so the highest reference count is first. - # First check that there's actually a defining element for the current ID name. - # (Cyn: I've seen documents with #id references but no element with that ID!) - idList = [(referencedIDs[rid][0], rid) for rid in referencedIDs - if rid in unprotectedElements] - idList.sort(reverse=True) - idList = [rid for count, rid in idList] - curIdNum = 1 +def shortenIDs(doc, prefix, options): + """ + Shortens ID names used in the document. ID names referenced the most often are assigned the + shortest ID names. - for rid in idList: - curId = intToID(curIdNum, prefix) - # First make sure that *this* element isn't already using - # the ID name we want to give it. - if curId != rid: - # Then, skip ahead if the new ID is already in identifiedElement. - while curId in identifiedElements: + Returns the number of bytes saved by shortening ID names in the document. + """ + num = 0 + + identifiedElements = findElementsWithId(doc.documentElement) + # This map contains maps the (original) ID to the nodes referencing it. + # At the end of this function, it will no longer be valid and while we + # could keep it up to date, it will complicate the code for no gain + # (as we do not reuse the data structure beyond this function). + referencedIDs = findReferencedElements(doc.documentElement) + + # Make idList (list of idnames) sorted by reference count + # descending, so the highest reference count is first. + # First check that there's actually a defining element for the current ID name. + # (Cyn: I've seen documents with #id references but no element with that ID!) 
+ idList = [(len(referencedIDs[rid]), rid) for rid in referencedIDs + if rid in identifiedElements] + idList.sort(reverse=True) + idList = [rid for count, rid in idList] + + # Add unreferenced IDs to end of idList in arbitrary order + idList.extend([rid for rid in identifiedElements if rid not in idList]) + # Ensure we do not reuse a protected ID by accident + protectedIDs = protected_ids(identifiedElements, options) + # IDs that have been allocated and should not be remapped. + consumedIDs = set() + + # List of IDs that need to be assigned a new ID. The list is ordered + # such that earlier entries will be assigned a shorter ID than those + # later in the list. IDs in this list *can* obtain an ID that is + # longer than they already are. + need_new_id = [] + + id_allocations = list(compute_id_lengths(len(idList) + 1)) + # Reverse so we can use it as a stack and still work from "shortest to + # longest" ID. + id_allocations.reverse() + + # Here we loop over all current IDs (that we /might/ want to remap) + # and group them into two. 1) The IDs that already have a perfect + # length (these are added to consumedIDs) and 2) the IDs that need + # to change length (these are appended to need_new_id). + optimal_id_length, id_use_limit = 0, 0 + for current_id in idList: + # If we are out of IDs of the current length, then move on + # to the next length + if id_use_limit < 1: + optimal_id_length, id_use_limit = id_allocations.pop() + # Reserve an ID from this length + id_use_limit -= 1 + # We check for strictly equal to optimal length because our ID + # remapping may have to assign one node a longer ID because + # another node needs a shorter ID. + if len(current_id) == optimal_id_length: + # This rid is already of optimal length - lets just keep it. + consumedIDs.add(current_id) + else: + # Needs a new (possibly longer) ID. + need_new_id.append(current_id) + + curIdNum = 1 + + for old_id in need_new_id: + new_id = intToID(curIdNum, prefix) + + # Skip ahead if the new ID has already been used or is protected. + while new_id in protectedIDs or new_id in consumedIDs: curIdNum += 1 - curId = intToID(curIdNum, prefix) - # Then go rename it. - num += renameID(doc, rid, curId, identifiedElements, referencedIDs) - curIdNum += 1 + new_id = intToID(curIdNum, prefix) + + # Now that we have found the first available ID, do the remap. + num += renameID(old_id, new_id, identifiedElements, referencedIDs.get(old_id)) + curIdNum += 1 + + return num + + +def compute_id_lengths(highest): + """Compute how many IDs are available of a given size + + Example: + >>> lengths = list(compute_id_lengths(512)) + >>> lengths + [(1, 26), (2, 676)] + >>> total_limit = sum(x[1] for x in lengths) + >>> total_limit + 702 + >>> intToID(total_limit, '') + 'zz' + + Which tells us that we got 26 IDs of length 1 and up to 676 IDs of length two + if we need to allocate 512 IDs. + + :param highest: Highest ID that need to be allocated + :return: An iterator that returns tuples of (id-length, use-limit). The + use-limit applies only to the given id-length (i.e. it is excluding IDs + of shorter length). Note that the sum of the use-limit values is always + equal to or greater than the highest param. + """ + step = 26 + id_length = 0 + use_limit = 1 + while highest: + id_length += 1 + use_limit *= step + yield (id_length, use_limit) + highest = int((highest - 1) / step) - return num def intToID(idnum, prefix): - """ - Returns the ID name for the given ID number, spreadsheet-style, i.e. 
from a to z, - then from aa to az, ba to bz, etc., until zz. - """ - rid = '' + """ + Returns the ID name for the given ID number, spreadsheet-style, i.e. from a to z, + then from aa to az, ba to bz, etc., until zz. + """ + rid = '' - while idnum > 0: - idnum -= 1 - rid = chr((idnum % 26) + ord('a')) + rid - idnum = int(idnum / 26) + while idnum > 0: + idnum -= 1 + rid = chr((idnum % 26) + ord('a')) + rid + idnum = int(idnum / 26) - return prefix + rid + return prefix + rid -def renameID(doc, idFrom, idTo, identifiedElements, referencedIDs): - """ - Changes the ID name from idFrom to idTo, on the declaring element - as well as all references in the document doc. - Updates identifiedElements and referencedIDs. - Does not handle the case where idTo is already the ID name - of another element in doc. +def renameID(idFrom, idTo, identifiedElements, referringNodes): + """ + Changes the ID name from idFrom to idTo, on the declaring element + as well as all nodes in referringNodes. - Returns the number of bytes saved by this replacement. - """ + Updates identifiedElements. - num = 0 + Returns the number of bytes saved by this replacement. + """ - definingNode = identifiedElements[idFrom] - definingNode.setAttribute("id", idTo) - del identifiedElements[idFrom] - identifiedElements[idTo] = definingNode + num = 0 - referringNodes = referencedIDs[idFrom] + definingNode = identifiedElements[idFrom] + definingNode.setAttribute("id", idTo) + num += len(idFrom) - len(idTo) - # Look for the idFrom ID name in each of the referencing elements, - # exactly like findReferencedElements would. - # Cyn: Duplicated processing! + # Update references to renamed node + if referringNodes is not None: - for node in referringNodes[1]: - # if this node is a style element, parse its text into CSS - if node.nodeName == 'style' and node.namespaceURI == NS['SVG']: - # node.firstChild will be either a CDATA or a Text node now - if node.firstChild != None: - # concatenate the value of all children, in case - # there's a CDATASection node surrounded by whitespace - # nodes - # (node.normalize() will NOT work here, it only acts on Text nodes) - oldValue = "".join([child.nodeValue for child in node.childNodes]) - # not going to reparse the whole thing - newValue = oldValue.replace('url(#' + idFrom + ')', 'url(#' + idTo + ')') - newValue = newValue.replace("url(#'" + idFrom + "')", 'url(#' + idTo + ')') - newValue = newValue.replace('url(#"' + idFrom + '")', 'url(#' + idTo + ')') - # and now replace all the children with this new stylesheet. - # again, this is in case the stylesheet was a CDATASection - node.childNodes[:] = [node.ownerDocument.createTextNode(newValue)] - num += len(oldValue) - len(newValue) + # Look for the idFrom ID name in each of the referencing elements, + # exactly like findReferencedElements would. + # Cyn: Duplicated processing! 
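A minimal standalone sketch of the spreadsheet-style numbering implemented by intToID above (the expected values are cross-checked against the compute_id_lengths doctest; the helper name int_to_id is only used here for illustration):

```python
def int_to_id(idnum, prefix=''):
    # Mirrors intToID above: 1 -> 'a', 26 -> 'z', 27 -> 'aa', ..., 702 -> 'zz', 703 -> 'aaa'
    rid = ''
    while idnum > 0:
        idnum -= 1
        rid = chr((idnum % 26) + ord('a')) + rid
        idnum //= 26
    return prefix + rid

assert int_to_id(1) == 'a'
assert int_to_id(26) == 'z'
assert int_to_id(27) == 'aa'
assert int_to_id(702) == 'zz'        # 26 + 676 IDs available at lengths 1 and 2
assert int_to_id(703, 'x') == 'xaaa'
```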
- # if xlink:href is set to #idFrom, then change the id - href = node.getAttributeNS(NS['XLINK'],'href') - if href == '#' + idFrom: - node.setAttributeNS(NS['XLINK'],'href', '#' + idTo) - num += len(idFrom) - len(idTo) + for node in referringNodes: + # if this node is a style element, parse its text into CSS + if node.nodeName == 'style' and node.namespaceURI == NS['SVG']: + # node.firstChild will be either a CDATA or a Text node now + if node.firstChild is not None: + # concatenate the value of all children, in case + # there's a CDATASection node surrounded by whitespace + # nodes + # (node.normalize() will NOT work here, it only acts on Text nodes) + oldValue = "".join(child.nodeValue for child in node.childNodes) + # not going to reparse the whole thing + newValue = oldValue.replace('url(#' + idFrom + ')', 'url(#' + idTo + ')') + newValue = newValue.replace("url(#'" + idFrom + "')", 'url(#' + idTo + ')') + newValue = newValue.replace('url(#"' + idFrom + '")', 'url(#' + idTo + ')') + # and now replace all the children with this new stylesheet. + # again, this is in case the stylesheet was a CDATASection + node.childNodes[:] = [node.ownerDocument.createTextNode(newValue)] + num += len(oldValue) - len(newValue) - # if the style has url(#idFrom), then change the id - styles = node.getAttribute('style') - if styles != '': - newValue = styles.replace('url(#' + idFrom + ')', 'url(#' + idTo + ')') - newValue = newValue.replace("url('#" + idFrom + "')", 'url(#' + idTo + ')') - newValue = newValue.replace('url("#' + idFrom + '")', 'url(#' + idTo + ')') - node.setAttribute('style', newValue) - num += len(styles) - len(newValue) + # if xlink:href is set to #idFrom, then change the id + href = node.getAttributeNS(NS['XLINK'], 'href') + if href == '#' + idFrom: + node.setAttributeNS(NS['XLINK'], 'href', '#' + idTo) + num += len(idFrom) - len(idTo) - # now try the fill, stroke, filter attributes - for attr in referencingProps: - oldValue = node.getAttribute(attr) - if oldValue != '': - newValue = oldValue.replace('url(#' + idFrom + ')', 'url(#' + idTo + ')') - newValue = newValue.replace("url('#" + idFrom + "')", 'url(#' + idTo + ')') - newValue = newValue.replace('url("#' + idFrom + '")', 'url(#' + idTo + ')') - node.setAttribute(attr, newValue) - num += len(oldValue) - len(newValue) + # if the style has url(#idFrom), then change the id + styles = node.getAttribute('style') + if styles != '': + newValue = styles.replace('url(#' + idFrom + ')', 'url(#' + idTo + ')') + newValue = newValue.replace("url('#" + idFrom + "')", 'url(#' + idTo + ')') + newValue = newValue.replace('url("#' + idFrom + '")', 'url(#' + idTo + ')') + node.setAttribute('style', newValue) + num += len(styles) - len(newValue) - del referencedIDs[idFrom] - referencedIDs[idTo] = referringNodes + # now try the fill, stroke, filter attributes + for attr in referencingProps: + oldValue = node.getAttribute(attr) + if oldValue != '': + newValue = oldValue.replace('url(#' + idFrom + ')', 'url(#' + idTo + ')') + newValue = newValue.replace("url('#" + idFrom + "')", 'url(#' + idTo + ')') + newValue = newValue.replace('url("#' + idFrom + '")', 'url(#' + idTo + ')') + node.setAttribute(attr, newValue) + num += len(oldValue) - len(newValue) + + return num + + +def protected_ids(seenIDs, options): + """Return a list of protected IDs out of the seenIDs""" + protectedIDs = [] + if options.protect_ids_prefix or options.protect_ids_noninkscape or options.protect_ids_list: + protect_ids_prefixes = [] + protect_ids_list = [] + if 
options.protect_ids_list: + protect_ids_list = options.protect_ids_list.split(",") + if options.protect_ids_prefix: + protect_ids_prefixes = options.protect_ids_prefix.split(",") + for id in seenIDs: + protected = False + if options.protect_ids_noninkscape and not id[-1].isdigit(): + protected = True + elif protect_ids_list and id in protect_ids_list: + protected = True + elif protect_ids_prefixes: + if any(id.startswith(prefix) for prefix in protect_ids_prefixes): + protected = True + if protected: + protectedIDs.append(id) + return protectedIDs - return num def unprotected_ids(doc, options): - u"""Returns a list of unprotected IDs within the document doc.""" - identifiedElements = findElementsWithId(doc.documentElement) - if not (options.protect_ids_noninkscape or - options.protect_ids_list or - options.protect_ids_prefix): - return identifiedElements - if options.protect_ids_list: - protect_ids_list = options.protect_ids_list.split(",") - if options.protect_ids_prefix: - protect_ids_prefixes = options.protect_ids_prefix.split(",") - for id in identifiedElements.keys(): - protected = False - if options.protect_ids_noninkscape and not id[-1].isdigit(): - protected = True - if options.protect_ids_list and id in protect_ids_list: - protected = True - if options.protect_ids_prefix: - for prefix in protect_ids_prefixes: - if id.startswith(prefix): - protected = True - if protected: - del identifiedElements[id] - return identifiedElements + u"""Returns a list of unprotected IDs within the document doc.""" + identifiedElements = findElementsWithId(doc.documentElement) + protectedIDs = protected_ids(identifiedElements, options) + if protectedIDs: + for id in protectedIDs: + del identifiedElements[id] + return identifiedElements -def removeUnreferencedIDs(referencedIDs, identifiedElements): - """ - Removes the unreferenced ID attributes. - Returns the number of ID attributes removed - """ - global numIDsRemoved - keepTags = ['font'] - num = 0; - for id in identifiedElements.keys(): - node = identifiedElements[id] - if referencedIDs.has_key(id) == False and not node.nodeName in keepTags: - node.removeAttribute('id') - numIDsRemoved += 1 - num += 1 - return num +def remove_unreferenced_ids(referencedIDs, identifiedElements): + """ + Removes the unreferenced ID attributes. 
+ + Returns the number of ID attributes removed + """ + keepTags = ['font'] + num = 0 + for id in identifiedElements: + node = identifiedElements[id] + if id not in referencedIDs and node.nodeName not in keepTags: + node.removeAttribute('id') + num += 1 + return num + def removeNamespacedAttributes(node, namespaces): - global numAttrsRemoved - num = 0 - if node.nodeType == 1 : - # remove all namespace'd attributes from this element - attrList = node.attributes - attrsToRemove = [] - for attrNum in xrange(attrList.length): - attr = attrList.item(attrNum) - if attr != None and attr.namespaceURI in namespaces: - attrsToRemove.append(attr.nodeName) - for attrName in attrsToRemove : - num += 1 - numAttrsRemoved += 1 - node.removeAttribute(attrName) + num = 0 + if node.nodeType == Node.ELEMENT_NODE: + # remove all namespace'd attributes from this element + attrList = node.attributes + attrsToRemove = [] + for attrNum in range(attrList.length): + attr = attrList.item(attrNum) + if attr is not None and attr.namespaceURI in namespaces: + attrsToRemove.append(attr.nodeName) + for attrName in attrsToRemove: + node.removeAttribute(attrName) + num += len(attrsToRemove) + + # now recurse for children + for child in node.childNodes: + num += removeNamespacedAttributes(child, namespaces) + return num - # now recurse for children - for child in node.childNodes: - num += removeNamespacedAttributes(child, namespaces) - return num def removeNamespacedElements(node, namespaces): - global numElemsRemoved - num = 0 - if node.nodeType == 1 : - # remove all namespace'd child nodes from this element - childList = node.childNodes - childrenToRemove = [] - for child in childList: - if child != None and child.namespaceURI in namespaces: - childrenToRemove.append(child) - for child in childrenToRemove : - num += 1 - numElemsRemoved += 1 - node.removeChild(child) + num = 0 + if node.nodeType == Node.ELEMENT_NODE: + # remove all namespace'd child nodes from this element + childList = node.childNodes + childrenToRemove = [] + for child in childList: + if child is not None and child.namespaceURI in namespaces: + childrenToRemove.append(child) + for child in childrenToRemove: + node.removeChild(child) + num += len(childrenToRemove) - # now recurse for children - for child in node.childNodes: - num += removeNamespacedElements(child, namespaces) - return num + # now recurse for children + for child in node.childNodes: + num += removeNamespacedElements(child, namespaces) + return num -def removeMetadataElements(doc): - global numElemsRemoved - num = 0 - # clone the list, as the tag list is live from the DOM - elementsToRemove = [element for element in doc.documentElement.getElementsByTagName('metadata')] - for element in elementsToRemove: - element.parentNode.removeChild(element) - num += 1 - numElemsRemoved += 1 +def remove_descriptive_elements(doc, options): + elementTypes = [] + if options.remove_descriptive_elements: + elementTypes.extend(("title", "desc", "metadata")) + else: + if options.remove_titles: + elementTypes.append("title") + if options.remove_descriptions: + elementTypes.append("desc") + if options.remove_metadata: + elementTypes.append("metadata") + if not elementTypes: + return 0 - return num + elementsToRemove = [] + for elementType in elementTypes: + elementsToRemove.extend(doc.documentElement.getElementsByTagName(elementType)) -def removeNestedGroups(node): - """ - This walks further and further down the tree, removing groups - which do not have any attributes or a title/desc child and - promoting their 
children up one level - """ - global numElemsRemoved - num = 0 + for element in elementsToRemove: + element.parentNode.removeChild(element) - groupsToRemove = [] - # Only consider <g> elements for promotion if this element isn't a <switch>. - # (partial fix for bug 594930, required by the SVG spec however) - if not (node.nodeType == 1 and node.nodeName == 'switch'): - for child in node.childNodes: - if child.nodeName == 'g' and child.namespaceURI == NS['SVG'] and len(child.attributes) == 0: - # only collapse group if it does not have a title or desc as a direct descendant, - for grandchild in child.childNodes: - if grandchild.nodeType == 1 and grandchild.namespaceURI == NS['SVG'] and \ - grandchild.nodeName in ['title','desc']: - break - else: - groupsToRemove.append(child) + return len(elementsToRemove) - for g in groupsToRemove: - while g.childNodes.length > 0: - g.parentNode.insertBefore(g.firstChild, g) - g.parentNode.removeChild(g) - numElemsRemoved += 1 - num += 1 - # now recurse for children - for child in node.childNodes: - if child.nodeType == 1: - num += removeNestedGroups(child) - return num +def g_tag_is_mergeable(node): + """Check if a <g> tag can be merged or not + + <g> tags with a title or descriptions should generally be left alone. + """ + if any( + True for n in node.childNodes + if n.nodeType == Node.ELEMENT_NODE and n.nodeName in ('title', 'desc') + and n.namespaceURI == NS['SVG'] + ): + return False + return True + + +def remove_nested_groups(node, stats): + """ + This walks further and further down the tree, removing groups + which do not have any attributes or a title/desc child and + promoting their children up one level + """ + num = 0 + + groupsToRemove = [] + # Only consider <g> elements for promotion if this element isn't a <switch>. + # (partial fix for bug 594930, required by the SVG spec however) + if not (node.nodeType == Node.ELEMENT_NODE and node.nodeName == 'switch'): + for child in node.childNodes: + if child.nodeName == 'g' and child.namespaceURI == NS['SVG'] and len(child.attributes) == 0: + # only collapse group if it does not have a title or desc as a direct descendant, + if g_tag_is_mergeable(child): + groupsToRemove.append(child) + + for g in groupsToRemove: + while g.childNodes.length > 0: + g.parentNode.insertBefore(g.firstChild, g) + g.parentNode.removeChild(g) + + num += len(groupsToRemove) + stats.num_elements_removed += len(groupsToRemove) + + # now recurse for children + for child in node.childNodes: + if child.nodeType == Node.ELEMENT_NODE: + num += remove_nested_groups(child, stats) + return num + def moveCommonAttributesToParentGroup(elem, referencedElements): - """ - This recursively calls this function on all children of the passed in element - and then iterates over all child elements and removes common inheritable attributes - from the children and places them in the parent group. But only if the parent contains - nothing but element children and whitespace. The attributes are only removed from the - children if the children are not referenced by other elements in the document. - """ - num = 0 + """ + This recursively calls this function on all children of the passed in element + and then iterates over all child elements and removes common inheritable attributes + from the children and places them in the parent group. But only if the parent contains + nothing but element children and whitespace. The attributes are only removed from the + children if the children are not referenced by other elements in the document. 
+ """ + num = 0 - childElements = [] - # recurse first into the children (depth-first) - for child in elem.childNodes: - if child.nodeType == 1: - # only add and recurse if the child is not referenced elsewhere - if not child.getAttribute('id') in referencedElements: - childElements.append(child) - num += moveCommonAttributesToParentGroup(child, referencedElements) - # else if the parent has non-whitespace text children, do not - # try to move common attributes - elif child.nodeType == 3 and child.nodeValue.strip(): - return num + childElements = [] + # recurse first into the children (depth-first) + for child in elem.childNodes: + if child.nodeType == Node.ELEMENT_NODE: + # only add and recurse if the child is not referenced elsewhere + if not child.getAttribute('id') in referencedElements: + childElements.append(child) + num += moveCommonAttributesToParentGroup(child, referencedElements) + # else if the parent has non-whitespace text children, do not + # try to move common attributes + elif child.nodeType == Node.TEXT_NODE and child.nodeValue.strip(): + return num - # only process the children if there are more than one element - if len(childElements) <= 1: return num + # only process the children if there are more than one element + if len(childElements) <= 1: + return num - commonAttrs = {} - # add all inheritable properties of the first child element - # FIXME: Note there is a chance that the first child is a set/animate in which case - # its fill attribute is not what we want to look at, we should look for the first - # non-animate/set element - attrList = childElements[0].attributes - for num in xrange(attrList.length): - attr = attrList.item(num) - # this is most of the inheritable properties from http://www.w3.org/TR/SVG11/propidx.html - # and http://www.w3.org/TR/SVGTiny12/attributeTable.html - if attr.nodeName in ['clip-rule', - 'display-align', - 'fill', 'fill-opacity', 'fill-rule', - 'font', 'font-family', 'font-size', 'font-size-adjust', 'font-stretch', - 'font-style', 'font-variant', 'font-weight', - 'letter-spacing', - 'pointer-events', 'shape-rendering', - 'stroke', 'stroke-dasharray', 'stroke-dashoffset', 'stroke-linecap', 'stroke-linejoin', - 'stroke-miterlimit', 'stroke-opacity', 'stroke-width', - 'text-anchor', 'text-decoration', 'text-rendering', 'visibility', - 'word-spacing', 'writing-mode']: - # we just add all the attributes from the first child - commonAttrs[attr.nodeName] = attr.nodeValue + commonAttrs = {} + # add all inheritable properties of the first child element + # FIXME: Note there is a chance that the first child is a set/animate in which case + # its fill attribute is not what we want to look at, we should look for the first + # non-animate/set element + attrList = childElements[0].attributes + for index in range(attrList.length): + attr = attrList.item(index) + # this is most of the inheritable properties from http://www.w3.org/TR/SVG11/propidx.html + # and http://www.w3.org/TR/SVGTiny12/attributeTable.html + if attr.nodeName in ['clip-rule', + 'display-align', + 'fill', 'fill-opacity', 'fill-rule', + 'font', 'font-family', 'font-size', 'font-size-adjust', 'font-stretch', + 'font-style', 'font-variant', 'font-weight', + 'letter-spacing', + 'pointer-events', 'shape-rendering', + 'stroke', 'stroke-dasharray', 'stroke-dashoffset', 'stroke-linecap', 'stroke-linejoin', + 'stroke-miterlimit', 'stroke-opacity', 'stroke-width', + 'text-anchor', 'text-decoration', 'text-rendering', 'visibility', + 'word-spacing', 'writing-mode']: + # we just add all the 
attributes from the first child + commonAttrs[attr.nodeName] = attr.nodeValue - # for each subsequent child element - for childNum in xrange(len(childElements)): - # skip first child - if childNum == 0: - continue + # for each subsequent child element + for childNum in range(len(childElements)): + # skip first child + if childNum == 0: + continue - child = childElements[childNum] - # if we are on an animateXXX/set element, ignore it (due to the 'fill' attribute) - if child.localName in ['set', 'animate', 'animateColor', 'animateTransform', 'animateMotion']: - continue + child = childElements[childNum] + # if we are on an animateXXX/set element, ignore it (due to the 'fill' attribute) + if child.localName in ['set', 'animate', 'animateColor', 'animateTransform', 'animateMotion']: + continue - distinctAttrs = [] - # loop through all current 'common' attributes - for name in commonAttrs.keys(): - # if this child doesn't match that attribute, schedule it for removal - if child.getAttribute(name) != commonAttrs[name]: - distinctAttrs.append(name) - # remove those attributes which are not common - for name in distinctAttrs: - del commonAttrs[name] + distinctAttrs = [] + # loop through all current 'common' attributes + for name in commonAttrs: + # if this child doesn't match that attribute, schedule it for removal + if child.getAttribute(name) != commonAttrs[name]: + distinctAttrs.append(name) + # remove those attributes which are not common + for name in distinctAttrs: + del commonAttrs[name] - # commonAttrs now has all the inheritable attributes which are common among all child elements - for name in commonAttrs.keys(): - for child in childElements: - child.removeAttribute(name) - elem.setAttribute(name, commonAttrs[name]) + # commonAttrs now has all the inheritable attributes which are common among all child elements + for name in commonAttrs: + for child in childElements: + child.removeAttribute(name) + elem.setAttribute(name, commonAttrs[name]) - # update our statistic (we remove N*M attributes and add back in M attributes) - num += (len(childElements)-1) * len(commonAttrs) - return num + # update our statistic (we remove N*M attributes and add back in M attributes) + num += (len(childElements) - 1) * len(commonAttrs) + return num -def createGroupsForCommonAttributes(elem): - """ - Creates <g> elements to contain runs of 3 or more - consecutive child elements having at least one common attribute. - Common attributes are not promoted to the <g> by this function. - This is handled by moveCommonAttributesToParentGroup. +def mergeSiblingGroupsWithCommonAttributes(elem): + """ + Merge two or more sibling <g> elements with the identical attributes. - If all children have a common attribute, an extra <g> is not created. + This function acts recursively on the given element. + """ - This function acts recursively on the given element. 
- """ - num = 0 - global numElemsRemoved - - # TODO perhaps all of the Presentation attributes in http://www.w3.org/TR/SVG/struct.html#GElement - # could be added here - # Cyn: These attributes are the same as in moveAttributesToParentGroup, and must always be - for curAttr in ['clip-rule', - 'display-align', - 'fill', 'fill-opacity', 'fill-rule', - 'font', 'font-family', 'font-size', 'font-size-adjust', 'font-stretch', - 'font-style', 'font-variant', 'font-weight', - 'letter-spacing', - 'pointer-events', 'shape-rendering', - 'stroke', 'stroke-dasharray', 'stroke-dashoffset', 'stroke-linecap', 'stroke-linejoin', - 'stroke-miterlimit', 'stroke-opacity', 'stroke-width', - 'text-anchor', 'text-decoration', 'text-rendering', 'visibility', - 'word-spacing', 'writing-mode']: - # Iterate through the children in reverse order, so item(i) for - # items we have yet to visit still returns the correct nodes. - curChild = elem.childNodes.length - 1 - while curChild >= 0: - childNode = elem.childNodes.item(curChild) - - if childNode.nodeType == 1 and childNode.getAttribute(curAttr) != '': - # We're in a possible run! Track the value and run length. - value = childNode.getAttribute(curAttr) - runStart, runEnd = curChild, curChild - # Run elements includes only element tags, no whitespace/comments/etc. - # Later, we calculate a run length which includes these. - runElements = 1 - - # Backtrack to get all the nodes having the same - # attribute value, preserving any nodes in-between. - while runStart > 0: - nextNode = elem.childNodes.item(runStart - 1) - if nextNode.nodeType == 1: - if nextNode.getAttribute(curAttr) != value: break - else: - runElements += 1 - runStart -= 1 - else: runStart -= 1 - - if runElements >= 3: - # Include whitespace/comment/etc. nodes in the run. - while runEnd < elem.childNodes.length - 1: - if elem.childNodes.item(runEnd + 1).nodeType == 1: break - else: runEnd += 1 - - runLength = runEnd - runStart + 1 - if runLength == elem.childNodes.length: # Every child has this - # If the current parent is a <g> already, - if elem.nodeName == 'g' and elem.namespaceURI == NS['SVG']: - # do not act altogether on this attribute; all the - # children have it in common. - # Let moveCommonAttributesToParentGroup do it. - curChild = -1 - continue - # otherwise, it might be an <svg> element, and - # even if all children have the same attribute value, - # it's going to be worth making the <g> since - # <svg> doesn't support attributes like 'stroke'. - # Fall through. - - # Create a <g> element from scratch. - # We need the Document for this. - document = elem.ownerDocument - group = document.createElementNS(NS['SVG'], 'g') - # Move the run of elements to the group. - # a) ADD the nodes to the new group. - group.childNodes[:] = elem.childNodes[runStart:runEnd + 1] - for child in group.childNodes: - child.parentNode = group - # b) REMOVE the nodes from the element. - elem.childNodes[runStart:runEnd + 1] = [] - # Include the group in elem's children. 
- elem.childNodes.insert(runStart, group) - group.parentNode = elem - num += 1 - curChild = runStart - 1 - numElemsRemoved -= 1 + num = 0 + i = elem.childNodes.length - 1 + while i >= 0: + currentNode = elem.childNodes.item(i) + if currentNode.nodeType != Node.ELEMENT_NODE or currentNode.nodeName != 'g' or \ + currentNode.namespaceURI != NS['SVG']: + i -= 1 + continue + attributes = {a.nodeName: a.nodeValue for a in currentNode.attributes.values()} + if not attributes: + i -= 1 + continue + runStart, runEnd = i, i + runElements = 1 + while runStart > 0: + nextNode = elem.childNodes.item(runStart - 1) + if nextNode.nodeType == Node.ELEMENT_NODE: + if nextNode.nodeName != 'g' or nextNode.namespaceURI != NS['SVG']: + break + nextAttributes = {a.nodeName: a.nodeValue for a in nextNode.attributes.values()} + if attributes != nextAttributes or not g_tag_is_mergeable(nextNode): + break + else: + runElements += 1 + runStart -= 1 else: - curChild -= 1 - else: - curChild -= 1 + runStart -= 1 - # each child gets the same treatment, recursively - for childNode in elem.childNodes: - if childNode.nodeType == 1: - num += createGroupsForCommonAttributes(childNode) + # Next loop will start from here + i = runStart - 1 + + if runElements < 2: + continue + + # Find the <g> entry that starts the run (we might have run + # past it into a text node or a comment node. + while True: + node = elem.childNodes.item(runStart) + if node.nodeType == Node.ELEMENT_NODE and node.nodeName == 'g' and node.namespaceURI == NS['SVG']: + break + runStart += 1 + primaryGroup = elem.childNodes.item(runStart) + runStart += 1 + nodes = elem.childNodes[runStart:runEnd+1] + for node in nodes: + if node.nodeType == Node.ELEMENT_NODE and node.nodeName == 'g' and node.namespaceURI == NS['SVG']: + # Merge + for child in node.childNodes[:]: + primaryGroup.appendChild(child) + elem.removeChild(node).unlink() + else: + primaryGroup.appendChild(node) + + # each child gets the same treatment, recursively + for childNode in elem.childNodes: + if childNode.nodeType == Node.ELEMENT_NODE: + num += mergeSiblingGroupsWithCommonAttributes(childNode) + + return num + + +def create_groups_for_common_attributes(elem, stats): + """ + Creates <g> elements to contain runs of 3 or more + consecutive child elements having at least one common attribute. + + Common attributes are not promoted to the <g> by this function. + This is handled by moveCommonAttributesToParentGroup. + + If all children have a common attribute, an extra <g> is not created. + + This function acts recursively on the given element. + """ + + # TODO perhaps all of the Presentation attributes in http://www.w3.org/TR/SVG/struct.html#GElement + # could be added here + # Cyn: These attributes are the same as in moveAttributesToParentGroup, and must always be + for curAttr in ['clip-rule', + 'display-align', + 'fill', 'fill-opacity', 'fill-rule', + 'font', 'font-family', 'font-size', 'font-size-adjust', 'font-stretch', + 'font-style', 'font-variant', 'font-weight', + 'letter-spacing', + 'pointer-events', 'shape-rendering', + 'stroke', 'stroke-dasharray', 'stroke-dashoffset', 'stroke-linecap', 'stroke-linejoin', + 'stroke-miterlimit', 'stroke-opacity', 'stroke-width', + 'text-anchor', 'text-decoration', 'text-rendering', 'visibility', + 'word-spacing', 'writing-mode']: + # Iterate through the children in reverse order, so item(i) for + # items we have yet to visit still returns the correct nodes. 
+ curChild = elem.childNodes.length - 1 + while curChild >= 0: + childNode = elem.childNodes.item(curChild) + + if ( + childNode.nodeType == Node.ELEMENT_NODE and + childNode.getAttribute(curAttr) != '' and + childNode.nodeName in [ + # only attempt to group elements that the content model allows to be children of a <g> + + # SVG 1.1 (see https://www.w3.org/TR/SVG/struct.html#GElement) + 'animate', 'animateColor', 'animateMotion', 'animateTransform', 'set', # animation elements + 'desc', 'metadata', 'title', # descriptive elements + 'circle', 'ellipse', 'line', 'path', 'polygon', 'polyline', 'rect', # shape elements + 'defs', 'g', 'svg', 'symbol', 'use', # structural elements + 'linearGradient', 'radialGradient', # gradient elements + 'a', 'altGlyphDef', 'clipPath', 'color-profile', 'cursor', 'filter', + 'font', 'font-face', 'foreignObject', 'image', 'marker', 'mask', + 'pattern', 'script', 'style', 'switch', 'text', 'view', + + # SVG 1.2 (see https://www.w3.org/TR/SVGTiny12/elementTable.html) + 'animation', 'audio', 'discard', 'handler', 'listener', + 'prefetch', 'solidColor', 'textArea', 'video' + ] + ): + # We're in a possible run! Track the value and run length. + value = childNode.getAttribute(curAttr) + runStart, runEnd = curChild, curChild + # Run elements includes only element tags, no whitespace/comments/etc. + # Later, we calculate a run length which includes these. + runElements = 1 + + # Backtrack to get all the nodes having the same + # attribute value, preserving any nodes in-between. + while runStart > 0: + nextNode = elem.childNodes.item(runStart - 1) + if nextNode.nodeType == Node.ELEMENT_NODE: + if nextNode.getAttribute(curAttr) != value: + break + else: + runElements += 1 + runStart -= 1 + else: + runStart -= 1 + + if runElements >= 3: + # Include whitespace/comment/etc. nodes in the run. + while runEnd < elem.childNodes.length - 1: + if elem.childNodes.item(runEnd + 1).nodeType == Node.ELEMENT_NODE: + break + else: + runEnd += 1 + + runLength = runEnd - runStart + 1 + if runLength == elem.childNodes.length: # Every child has this + # If the current parent is a <g> already, + if elem.nodeName == 'g' and elem.namespaceURI == NS['SVG']: + # do not act altogether on this attribute; all the + # children have it in common. + # Let moveCommonAttributesToParentGroup do it. + curChild = -1 + continue + # otherwise, it might be an <svg> element, and + # even if all children have the same attribute value, + # it's going to be worth making the <g> since + # <svg> doesn't support attributes like 'stroke'. + # Fall through. + + # Create a <g> element from scratch. + # We need the Document for this. + document = elem.ownerDocument + group = document.createElementNS(NS['SVG'], 'g') + # Move the run of elements to the group. + # a) ADD the nodes to the new group. + group.childNodes[:] = elem.childNodes[runStart:runEnd + 1] + for child in group.childNodes: + child.parentNode = group + # b) REMOVE the nodes from the element. + elem.childNodes[runStart:runEnd + 1] = [] + # Include the group in elem's children. 
+ elem.childNodes.insert(runStart, group) + group.parentNode = elem + curChild = runStart - 1 + stats.num_elements_removed -= 1 + else: + curChild -= 1 + else: + curChild -= 1 + + # each child gets the same treatment, recursively + for childNode in elem.childNodes: + if childNode.nodeType == Node.ELEMENT_NODE: + create_groups_for_common_attributes(childNode, stats) - return num def removeUnusedAttributesOnParent(elem): - """ - This recursively calls this function on all children of the element passed in, - then removes any unused attributes on this elem if none of the children inherit it - """ - num = 0 + """ + This recursively calls this function on all children of the element passed in, + then removes any unused attributes on this elem if none of the children inherit it + """ + num = 0 - childElements = [] - # recurse first into the children (depth-first) - for child in elem.childNodes: - if child.nodeType == 1: - childElements.append(child) - num += removeUnusedAttributesOnParent(child) + childElements = [] + # recurse first into the children (depth-first) + for child in elem.childNodes: + if child.nodeType == Node.ELEMENT_NODE: + childElements.append(child) + num += removeUnusedAttributesOnParent(child) - # only process the children if there are more than one element - if len(childElements) <= 1: return num + # only process the children if there are more than one element + if len(childElements) <= 1: + return num - # get all attribute values on this parent - attrList = elem.attributes - unusedAttrs = {} - for num in xrange(attrList.length): - attr = attrList.item(num) - if attr.nodeName in ['clip-rule', - 'display-align', - 'fill', 'fill-opacity', 'fill-rule', - 'font', 'font-family', 'font-size', 'font-size-adjust', 'font-stretch', - 'font-style', 'font-variant', 'font-weight', - 'letter-spacing', - 'pointer-events', 'shape-rendering', - 'stroke', 'stroke-dasharray', 'stroke-dashoffset', 'stroke-linecap', 'stroke-linejoin', - 'stroke-miterlimit', 'stroke-opacity', 'stroke-width', - 'text-anchor', 'text-decoration', 'text-rendering', 'visibility', - 'word-spacing', 'writing-mode']: - unusedAttrs[attr.nodeName] = attr.nodeValue + # get all attribute values on this parent + attrList = elem.attributes + unusedAttrs = {} + for index in range(attrList.length): + attr = attrList.item(index) + if attr.nodeName in ['clip-rule', + 'display-align', + 'fill', 'fill-opacity', 'fill-rule', + 'font', 'font-family', 'font-size', 'font-size-adjust', 'font-stretch', + 'font-style', 'font-variant', 'font-weight', + 'letter-spacing', + 'pointer-events', 'shape-rendering', + 'stroke', 'stroke-dasharray', 'stroke-dashoffset', 'stroke-linecap', 'stroke-linejoin', + 'stroke-miterlimit', 'stroke-opacity', 'stroke-width', + 'text-anchor', 'text-decoration', 'text-rendering', 'visibility', + 'word-spacing', 'writing-mode']: + unusedAttrs[attr.nodeName] = attr.nodeValue - # for each child, if at least one child inherits the parent's attribute, then remove - for childNum in xrange(len(childElements)): - child = childElements[childNum] - inheritedAttrs = [] - for name in unusedAttrs.keys(): - val = child.getAttribute(name) - if val == '' or val == None or val == 'inherit': - inheritedAttrs.append(name) - for a in inheritedAttrs: - del unusedAttrs[a] + # for each child, if at least one child inherits the parent's attribute, then remove + for child in childElements: + inheritedAttrs = [] + for name in unusedAttrs: + val = child.getAttribute(name) + if val == '' or val == 'inherit': + inheritedAttrs.append(name) + 
for a in inheritedAttrs: + del unusedAttrs[a] - # unusedAttrs now has all the parent attributes that are unused - for name in unusedAttrs.keys(): - elem.removeAttribute(name) - num += 1 + # unusedAttrs now has all the parent attributes that are unused + for name in unusedAttrs: + elem.removeAttribute(name) + num += 1 - return num + return num -def removeDuplicateGradientStops(doc): - global numElemsRemoved - num = 0 - for gradType in ['linearGradient', 'radialGradient']: - for grad in doc.getElementsByTagName(gradType): - stops = {} - stopsToRemove = [] - for stop in grad.getElementsByTagName('stop'): - # convert percentages into a floating point number - offsetU = SVGLength(stop.getAttribute('offset')) - if offsetU.units == Unit.PCT: - offset = offsetU.value / 100.0 - elif offsetU.units == Unit.NONE: - offset = offsetU.value - else: - offset = 0 - # set the stop offset value to the integer or floating point equivalent - if int(offset) == offset: stop.setAttribute('offset', str(int(offset))) - else: stop.setAttribute('offset', str(offset)) +def remove_duplicate_gradient_stops(doc, stats): + num = 0 - color = stop.getAttribute('stop-color') - opacity = stop.getAttribute('stop-opacity') - style = stop.getAttribute('style') - if stops.has_key(offset) : - oldStop = stops[offset] - if oldStop[0] == color and oldStop[1] == opacity and oldStop[2] == style: - stopsToRemove.append(stop) - stops[offset] = [color, opacity, style] + for gradType in ['linearGradient', 'radialGradient']: + for grad in doc.getElementsByTagName(gradType): + stops = {} + stopsToRemove = [] + for stop in grad.getElementsByTagName('stop'): + # convert percentages into a floating point number + offsetU = SVGLength(stop.getAttribute('offset')) + if offsetU.units == Unit.PCT: + offset = offsetU.value / 100.0 + elif offsetU.units == Unit.NONE: + offset = offsetU.value + else: + offset = 0 + # set the stop offset value to the integer or floating point equivalent + if int(offset) == offset: + stop.setAttribute('offset', str(int(offset))) + else: + stop.setAttribute('offset', str(offset)) - for stop in stopsToRemove: - stop.parentNode.removeChild(stop) - num += 1 - numElemsRemoved += 1 + color = stop.getAttribute('stop-color') + opacity = stop.getAttribute('stop-opacity') + style = stop.getAttribute('style') + if offset in stops: + oldStop = stops[offset] + if oldStop[0] == color and oldStop[1] == opacity and oldStop[2] == style: + stopsToRemove.append(stop) + stops[offset] = [color, opacity, style] - # linear gradients - return num + for stop in stopsToRemove: + stop.parentNode.removeChild(stop) + num += len(stopsToRemove) + stats.num_elements_removed += len(stopsToRemove) -def collapseSinglyReferencedGradients(doc): - global numElemsRemoved - num = 0 + return num - identifiedElements = findElementsWithId(doc.documentElement) - # make sure to reset the ref'ed ids for when we are running this in testscour - for rid,nodeCount in findReferencedElements(doc.documentElement).iteritems(): - count = nodeCount[0] - nodes = nodeCount[1] - # Make sure that there's actually a defining element for the current ID name. - # (Cyn: I've seen documents with #id references but no element with that ID!) 
- if count == 1 and rid in identifiedElements: - elem = identifiedElements[rid] - if elem != None and elem.nodeType == 1 and elem.nodeName in ['linearGradient', 'radialGradient'] \ - and elem.namespaceURI == NS['SVG']: - # found a gradient that is referenced by only 1 other element - refElem = nodes[0] - if refElem.nodeType == 1 and refElem.nodeName in ['linearGradient', 'radialGradient'] \ - and refElem.namespaceURI == NS['SVG']: - # elem is a gradient referenced by only one other gradient (refElem) +def collapse_singly_referenced_gradients(doc, stats): + num = 0 - # add the stops to the referencing gradient (this removes them from elem) - if len(refElem.getElementsByTagName('stop')) == 0: - stopsToAdd = elem.getElementsByTagName('stop') - for stop in stopsToAdd: - refElem.appendChild(stop) + identifiedElements = findElementsWithId(doc.documentElement) - # adopt the gradientUnits, spreadMethod, gradientTransform attributes if - # they are unspecified on refElem - for attr in ['gradientUnits','spreadMethod','gradientTransform']: - if refElem.getAttribute(attr) == '' and not elem.getAttribute(attr) == '': - refElem.setAttributeNS(None, attr, elem.getAttribute(attr)) + # make sure to reset the ref'ed ids for when we are running this in testscour + for rid, nodes in six.iteritems(findReferencedElements(doc.documentElement)): + # Make sure that there's actually a defining element for the current ID name. + # (Cyn: I've seen documents with #id references but no element with that ID!) + if len(nodes) == 1 and rid in identifiedElements: + elem = identifiedElements[rid] + if ( + elem is not None and + elem.nodeType == Node.ELEMENT_NODE and + elem.nodeName in ['linearGradient', 'radialGradient'] and + elem.namespaceURI == NS['SVG'] + ): + # found a gradient that is referenced by only 1 other element + refElem = nodes.pop() + if refElem.nodeType == Node.ELEMENT_NODE and refElem.nodeName in ['linearGradient', 'radialGradient'] \ + and refElem.namespaceURI == NS['SVG']: + # elem is a gradient referenced by only one other gradient (refElem) - # if both are radialGradients, adopt elem's fx,fy,cx,cy,r attributes if - # they are unspecified on refElem - if elem.nodeName == 'radialGradient' and refElem.nodeName == 'radialGradient': - for attr in ['fx','fy','cx','cy','r']: - if refElem.getAttribute(attr) == '' and not elem.getAttribute(attr) == '': - refElem.setAttributeNS(None, attr, elem.getAttribute(attr)) + # add the stops to the referencing gradient (this removes them from elem) + if len(refElem.getElementsByTagName('stop')) == 0: + stopsToAdd = elem.getElementsByTagName('stop') + for stop in stopsToAdd: + refElem.appendChild(stop) - # if both are linearGradients, adopt elem's x1,y1,x2,y2 attributes if - # they are unspecified on refElem - if elem.nodeName == 'linearGradient' and refElem.nodeName == 'linearGradient': - for attr in ['x1','y1','x2','y2']: - if refElem.getAttribute(attr) == '' and not elem.getAttribute(attr) == '': - refElem.setAttributeNS(None, attr, elem.getAttribute(attr)) + # adopt the gradientUnits, spreadMethod, gradientTransform attributes if + # they are unspecified on refElem + for attr in ['gradientUnits', 'spreadMethod', 'gradientTransform']: + if refElem.getAttribute(attr) == '' and not elem.getAttribute(attr) == '': + refElem.setAttributeNS(None, attr, elem.getAttribute(attr)) - # now remove the xlink:href from refElem - refElem.removeAttributeNS(NS['XLINK'], 'href') + # if both are radialGradients, adopt elem's fx,fy,cx,cy,r attributes if + # they are unspecified on refElem 
+ if elem.nodeName == 'radialGradient' and refElem.nodeName == 'radialGradient': + for attr in ['fx', 'fy', 'cx', 'cy', 'r']: + if refElem.getAttribute(attr) == '' and not elem.getAttribute(attr) == '': + refElem.setAttributeNS(None, attr, elem.getAttribute(attr)) + + # if both are linearGradients, adopt elem's x1,y1,x2,y2 attributes if + # they are unspecified on refElem + if elem.nodeName == 'linearGradient' and refElem.nodeName == 'linearGradient': + for attr in ['x1', 'y1', 'x2', 'y2']: + if refElem.getAttribute(attr) == '' and not elem.getAttribute(attr) == '': + refElem.setAttributeNS(None, attr, elem.getAttribute(attr)) + + target_href = elem.getAttributeNS(NS['XLINK'], 'href') + if target_href: + # If the elem node had an xlink:href, then the + # refElem have to point to it as well to + # preserve the semantics of the image. + refElem.setAttributeNS(NS['XLINK'], 'href', target_href) + else: + # The elem node had no xlink:href reference, + # so we can simply remove the attribute. + refElem.removeAttributeNS(NS['XLINK'], 'href') + + # now delete elem + elem.parentNode.removeChild(elem) + stats.num_elements_removed += 1 + num += 1 + + return num + + +def computeGradientBucketKey(grad): + # Compute a key (hashable opaque value; here a string) from each + # gradient such that "key(grad1) == key(grad2)" is the same as + # saying that grad1 is a duplicate of grad2. + gradBucketAttr = ['gradientUnits', 'spreadMethod', 'gradientTransform', + 'x1', 'y1', 'x2', 'y2', 'cx', 'cy', 'fx', 'fy', 'r'] + gradStopBucketsAttr = ['offset', 'stop-color', 'stop-opacity', 'style'] + + # A linearGradient can never be a duplicate of a + # radialGradient (and vice versa) + subKeys = [grad.getAttribute(a) for a in gradBucketAttr] + subKeys.append(grad.getAttributeNS(NS['XLINK'], 'href')) + stops = grad.getElementsByTagName('stop') + if stops.length: + for i in range(stops.length): + stop = stops.item(i) + for attr in gradStopBucketsAttr: + stopKey = stop.getAttribute(attr) + subKeys.append(stopKey) + + # Use a raw ASCII "record separator" control character as it is + # not likely to be used in any of these values (without having to + # be escaped). + return "\x1e".join(subKeys) + + +def detect_duplicate_gradients(*grad_lists): + """Detects duplicate gradients from each iterable/generator given as argument + + Yields (master, master_id, duplicates_id, duplicates) tuples where: + * master_id: The ID attribute of the master element. This will always be non-empty + and not None as long at least one of the gradients have a valid ID. + * duplicates_id: List of ID attributes of the duplicate gradients elements (can be + empty where the gradient had no ID attribute) + * duplicates: List of elements that are duplicates of the `master` element. Will + never include the `master` element. Has the same order as `duplicates_id` - i.e. + `duplicates[X].getAttribute("id") == duplicates_id[X]`. + """ + for grads in grad_lists: + grad_buckets = defaultdict(list) + + for grad in grads: + key = computeGradientBucketKey(grad) + grad_buckets[key].append(grad) + + for bucket in six.itervalues(grad_buckets): + if len(bucket) < 2: + # The gradient must be unique if it is the only one in + # this bucket. + continue + master = bucket[0] + duplicates = bucket[1:] + duplicates_ids = [d.getAttribute('id') for d in duplicates] + master_id = master.getAttribute('id') + if not master_id: + # If our selected "master" copy does not have an ID, + # then replace it with one that does (assuming any of + # them has one). 
This avoids broken images like we + # saw in GH#203 + for i in range(len(duplicates_ids)): + dup_id = duplicates_ids[i] + if dup_id: + # We do not bother updating the master field + # as it is not used any more. + master_id = duplicates_ids[i] + duplicates[i] = master + # Clear the old id to avoid a redundant remapping + duplicates_ids[i] = "" + break + + yield master_id, duplicates_ids, duplicates + + +def dedup_gradient(master_id, duplicates_ids, duplicates, referenced_ids): + func_iri = None + for dup_id, dup_grad in zip(duplicates_ids, duplicates): + # if the duplicate gradient no longer has a parent that means it was + # already re-mapped to another master gradient + if not dup_grad.parentNode: + continue + + # With --keep-unreferenced-defs, we can end up with + # unreferenced gradients. See GH#156. + if dup_id in referenced_ids: + if func_iri is None: + # matches url(#<ANY_DUP_ID>), url('#<ANY_DUP_ID>') and url("#<ANY_DUP_ID>") + dup_id_regex = "|".join(duplicates_ids) + func_iri = re.compile('url\\([\'"]?#(?:' + dup_id_regex + ')[\'"]?\\)') + for elem in referenced_ids[dup_id]: + # find out which attribute referenced the duplicate gradient + for attr in ['fill', 'stroke']: + v = elem.getAttribute(attr) + (v_new, n) = func_iri.subn('url(#' + master_id + ')', v) + if n > 0: + elem.setAttribute(attr, v_new) + if elem.getAttributeNS(NS['XLINK'], 'href') == '#' + dup_id: + elem.setAttributeNS(NS['XLINK'], 'href', '#' + master_id) + styles = _getStyle(elem) + for style in styles: + v = styles[style] + (v_new, n) = func_iri.subn('url(#' + master_id + ')', v) + if n > 0: + styles[style] = v_new + _setStyle(elem, styles) + + # now that all referencing elements have been re-mapped to the master + # it is safe to remove this gradient from the document + dup_grad.parentNode.removeChild(dup_grad) + + # If the gradients have an ID, we update referenced_ids to match the newly remapped IDs. + # This enables us to avoid calling findReferencedElements once per loop, which is helpful as it is + # one of the slowest functions in scour.
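A rough sketch of the bucketing idea used by detect_duplicate_gradients above, with hypothetical IDs and keys standing in for real gradient elements and their computeGradientBucketKey values:

```python
from collections import defaultdict

# Hypothetical data: (gradient id, bucket key); equal keys mean duplicate gradients.
gradients = [('grad1', 'K1'), ('grad2', 'K2'), ('grad3', 'K1'), ('grad4', 'K1')]

buckets = defaultdict(list)
for gid, key in gradients:
    buckets[key].append(gid)

for bucket in buckets.values():
    if len(bucket) < 2:
        continue                          # unique gradient, nothing to merge
    master, duplicates = bucket[0], bucket[1:]
    print(master, '<-', duplicates)       # grad1 <- ['grad3', 'grad4']
```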
+ if master_id: + try: + master_references = referenced_ids[master_id] + except KeyError: + master_references = set() + + for dup_id in duplicates_ids: + references = referenced_ids.pop(dup_id, None) + if references is None: + continue + master_references.update(references) + + # Only necessary but needed if the master gradient did + # not have any references originally + referenced_ids[master_id] = master_references - # now delete elem - elem.parentNode.removeChild(elem) - numElemsRemoved += 1 - num += 1 - return num def removeDuplicateGradients(doc): - global numElemsRemoved - num = 0 + prev_num = -1 + num = 0 - gradientsToRemove = {} - duplicateToMaster = {} + # get a collection of all elements that are referenced and their referencing elements + referenced_ids = findReferencedElements(doc.documentElement) - for gradType in ['linearGradient', 'radialGradient']: - grads = doc.getElementsByTagName(gradType) - for grad in grads: - # TODO: should slice grads from 'grad' here to optimize - for ograd in grads: - # do not compare gradient to itself - if grad == ograd: continue + while prev_num != num: + prev_num = num - # compare grad to ograd (all properties, then all stops) - # if attributes do not match, go to next gradient - someGradAttrsDoNotMatch = False - for attr in ['gradientUnits','spreadMethod','gradientTransform','x1','y1','x2','y2','cx','cy','fx','fy','r']: - if grad.getAttribute(attr) != ograd.getAttribute(attr): - someGradAttrsDoNotMatch = True - break; + linear_gradients = doc.getElementsByTagName('linearGradient') + radial_gradients = doc.getElementsByTagName('radialGradient') - if someGradAttrsDoNotMatch: continue + for master_id, duplicates_ids, duplicates in detect_duplicate_gradients(linear_gradients, radial_gradients): + dedup_gradient(master_id, duplicates_ids, duplicates, referenced_ids) + num += len(duplicates) - # compare xlink:href values too - if grad.getAttributeNS(NS['XLINK'], 'href') != ograd.getAttributeNS(NS['XLINK'], 'href'): - continue + return num - # all gradient properties match, now time to compare stops - stops = grad.getElementsByTagName('stop') - ostops = ograd.getElementsByTagName('stop') - - if stops.length != ostops.length: continue - - # now compare stops - stopsNotEqual = False - for i in xrange(stops.length): - if stopsNotEqual: break - stop = stops.item(i) - ostop = ostops.item(i) - for attr in ['offset', 'stop-color', 'stop-opacity', 'style']: - if stop.getAttribute(attr) != ostop.getAttribute(attr): - stopsNotEqual = True - break - if stopsNotEqual: continue - - # ograd is a duplicate of grad, we schedule it to be removed UNLESS - # ograd is ALREADY considered a 'master' element - if not gradientsToRemove.has_key(ograd): - if not duplicateToMaster.has_key(ograd): - if not gradientsToRemove.has_key(grad): - gradientsToRemove[grad] = [] - gradientsToRemove[grad].append( ograd ) - duplicateToMaster[ograd] = grad - - # get a collection of all elements that are referenced and their referencing elements - referencedIDs = findReferencedElements(doc.documentElement) - for masterGrad in gradientsToRemove.keys(): - master_id = masterGrad.getAttribute('id') -# print 'master='+master_id - for dupGrad in gradientsToRemove[masterGrad]: - # if the duplicate gradient no longer has a parent that means it was - # already re-mapped to another master gradient - if not dupGrad.parentNode: continue - dup_id = dupGrad.getAttribute('id') -# print 'dup='+dup_id -# print referencedIDs[dup_id] - # for each element that referenced the gradient we are going to remove - 
for elem in referencedIDs[dup_id][1]: - # find out which attribute referenced the duplicate gradient - for attr in ['fill', 'stroke']: - v = elem.getAttribute(attr) - if v == 'url(#'+dup_id+')' or v == 'url("#'+dup_id+'")' or v == "url('#"+dup_id+"')": - elem.setAttribute(attr, 'url(#'+master_id+')') - if elem.getAttributeNS(NS['XLINK'], 'href') == '#'+dup_id: - elem.setAttributeNS(NS['XLINK'], 'href', '#'+master_id) - styles = _getStyle(elem) - for style in styles: - v = styles[style] - if v == 'url(#'+dup_id+')' or v == 'url("#'+dup_id+'")' or v == "url('#"+dup_id+"')": - styles[style] = 'url(#'+master_id+')' - _setStyle(elem, styles) - - # now that all referencing elements have been re-mapped to the master - # it is safe to remove this gradient from the document - dupGrad.parentNode.removeChild(dupGrad) - numElemsRemoved += 1 - num += 1 - return num def _getStyle(node): - u"""Returns the style attribute of a node as a dictionary.""" - if node.nodeType == 1 and len(node.getAttribute('style')) > 0 : - styleMap = { } - rawStyles = node.getAttribute('style').split(';') - for style in rawStyles: - propval = style.split(':') - if len(propval) == 2 : - styleMap[propval[0].strip()] = propval[1].strip() - return styleMap - else: - return {} + u"""Returns the style attribute of a node as a dictionary.""" + if node.nodeType != Node.ELEMENT_NODE: + return {} + style_attribute = node.getAttribute('style') + if style_attribute: + styleMap = {} + rawStyles = style_attribute.split(';') + for style in rawStyles: + propval = style.split(':') + if len(propval) == 2: + styleMap[propval[0].strip()] = propval[1].strip() + return styleMap + else: + return {} + def _setStyle(node, styleMap): - u"""Sets the style attribute of a node to the dictionary ``styleMap``.""" - fixedStyle = ';'.join([prop + ':' + styleMap[prop] for prop in styleMap.keys()]) - if fixedStyle != '' : - node.setAttribute('style', fixedStyle) - elif node.getAttribute('style'): - node.removeAttribute('style') - return node + u"""Sets the style attribute of a node to the dictionary ``styleMap``.""" + fixedStyle = ';'.join(prop + ':' + styleMap[prop] for prop in styleMap) + if fixedStyle != '': + node.setAttribute('style', fixedStyle) + elif node.getAttribute('style'): + node.removeAttribute('style') + return node + def repairStyle(node, options): - num = 0 - styleMap = _getStyle(node) - if styleMap: + num = 0 + styleMap = _getStyle(node) + if styleMap: - # I've seen this enough to know that I need to correct it: - # fill: url(#linearGradient4918) rgb(0, 0, 0); - for prop in ['fill', 'stroke'] : - if styleMap.has_key(prop) : - chunk = styleMap[prop].split(') ') - if len(chunk) == 2 and (chunk[0][:5] == 'url(#' or chunk[0][:6] == 'url("#' or chunk[0][:6] == "url('#") and chunk[1] == 'rgb(0, 0, 0)' : - styleMap[prop] = chunk[0] + ')' - num += 1 + # I've seen this enough to know that I need to correct it: + # fill: url(#linearGradient4918) rgb(0, 0, 0); + for prop in ['fill', 'stroke']: + if prop in styleMap: + chunk = styleMap[prop].split(') ') + if (len(chunk) == 2 + and (chunk[0][:5] == 'url(#' or chunk[0][:6] == 'url("#' or chunk[0][:6] == "url('#") + and chunk[1] == 'rgb(0, 0, 0)'): + styleMap[prop] = chunk[0] + ')' + num += 1 - # Here is where we can weed out unnecessary styles like: - # opacity:1 - if styleMap.has_key('opacity') : - opacity = float(styleMap['opacity']) - # if opacity='0' then all fill and stroke properties are useless, remove them - if opacity == 0.0 : - for uselessStyle in ['fill', 'fill-opacity', 'fill-rule', 'stroke', 
'stroke-linejoin', - 'stroke-opacity', 'stroke-miterlimit', 'stroke-linecap', 'stroke-dasharray', - 'stroke-dashoffset', 'stroke-opacity'] : - if styleMap.has_key(uselessStyle): - del styleMap[uselessStyle] - num += 1 + # Here is where we can weed out unnecessary styles like: + # opacity:1 + if 'opacity' in styleMap: + opacity = float(styleMap['opacity']) + # if opacity='0' then all fill and stroke properties are useless, remove them + if opacity == 0.0: + for uselessStyle in ['fill', 'fill-opacity', 'fill-rule', 'stroke', 'stroke-linejoin', + 'stroke-opacity', 'stroke-miterlimit', 'stroke-linecap', 'stroke-dasharray', + 'stroke-dashoffset', 'stroke-opacity']: + if uselessStyle in styleMap and not styleInheritedByChild(node, uselessStyle): + del styleMap[uselessStyle] + num += 1 - # if stroke:none, then remove all stroke-related properties (stroke-width, etc) - # TODO: should also detect if the computed value of this element is stroke="none" - if styleMap.has_key('stroke') and styleMap['stroke'] == 'none' : - for strokestyle in [ 'stroke-width', 'stroke-linejoin', 'stroke-miterlimit', - 'stroke-linecap', 'stroke-dasharray', 'stroke-dashoffset', 'stroke-opacity'] : - if styleMap.has_key(strokestyle) : - del styleMap[strokestyle] - num += 1 - # TODO: This is actually a problem if a parent element has a specified stroke - # we need to properly calculate computed values - del styleMap['stroke'] + # if stroke:none, then remove all stroke-related properties (stroke-width, etc) + # TODO: should also detect if the computed value of this element is stroke="none" + if 'stroke' in styleMap and styleMap['stroke'] == 'none': + for strokestyle in ['stroke-width', 'stroke-linejoin', 'stroke-miterlimit', + 'stroke-linecap', 'stroke-dasharray', 'stroke-dashoffset', 'stroke-opacity']: + if strokestyle in styleMap and not styleInheritedByChild(node, strokestyle): + del styleMap[strokestyle] + num += 1 + # we need to properly calculate computed values + if not styleInheritedByChild(node, 'stroke'): + if styleInheritedFromParent(node, 'stroke') in [None, 'none']: + del styleMap['stroke'] + num += 1 - # if fill:none, then remove all fill-related properties (fill-rule, etc) - if styleMap.has_key('fill') and styleMap['fill'] == 'none' : - for fillstyle in [ 'fill-rule', 'fill-opacity' ] : - if styleMap.has_key(fillstyle) : - del styleMap[fillstyle] - num += 1 + # if fill:none, then remove all fill-related properties (fill-rule, etc) + if 'fill' in styleMap and styleMap['fill'] == 'none': + for fillstyle in ['fill-rule', 'fill-opacity']: + if fillstyle in styleMap and not styleInheritedByChild(node, fillstyle): + del styleMap[fillstyle] + num += 1 - # fill-opacity: 0 - if styleMap.has_key('fill-opacity') : - fillOpacity = float(styleMap['fill-opacity']) - if fillOpacity == 0.0 : - for uselessFillStyle in [ 'fill', 'fill-rule' ] : - if styleMap.has_key(uselessFillStyle): - del styleMap[uselessFillStyle] - num += 1 + # fill-opacity: 0 + if 'fill-opacity' in styleMap: + fillOpacity = float(styleMap['fill-opacity']) + if fillOpacity == 0.0: + for uselessFillStyle in ['fill', 'fill-rule']: + if uselessFillStyle in styleMap and not styleInheritedByChild(node, uselessFillStyle): + del styleMap[uselessFillStyle] + num += 1 - # stroke-opacity: 0 - if styleMap.has_key('stroke-opacity') : - strokeOpacity = float(styleMap['stroke-opacity']) - if strokeOpacity == 0.0 : - for uselessStrokeStyle in [ 'stroke', 'stroke-width', 'stroke-linejoin', 'stroke-linecap', - 'stroke-dasharray', 'stroke-dashoffset' ] : - if 
styleMap.has_key(uselessStrokeStyle): - del styleMap[uselessStrokeStyle] - num += 1 + # stroke-opacity: 0 + if 'stroke-opacity' in styleMap: + strokeOpacity = float(styleMap['stroke-opacity']) + if strokeOpacity == 0.0: + for uselessStrokeStyle in ['stroke', 'stroke-width', 'stroke-linejoin', 'stroke-linecap', + 'stroke-dasharray', 'stroke-dashoffset']: + if uselessStrokeStyle in styleMap and not styleInheritedByChild(node, uselessStrokeStyle): + del styleMap[uselessStrokeStyle] + num += 1 - # stroke-width: 0 - if styleMap.has_key('stroke-width') : - strokeWidth = SVGLength(styleMap['stroke-width']) - if strokeWidth.value == 0.0 : - for uselessStrokeStyle in [ 'stroke', 'stroke-linejoin', 'stroke-linecap', - 'stroke-dasharray', 'stroke-dashoffset', 'stroke-opacity' ] : - if styleMap.has_key(uselessStrokeStyle): - del styleMap[uselessStrokeStyle] - num += 1 + # stroke-width: 0 + if 'stroke-width' in styleMap: + strokeWidth = SVGLength(styleMap['stroke-width']) + if strokeWidth.value == 0.0: + for uselessStrokeStyle in ['stroke', 'stroke-linejoin', 'stroke-linecap', + 'stroke-dasharray', 'stroke-dashoffset', 'stroke-opacity']: + if uselessStrokeStyle in styleMap and not styleInheritedByChild(node, uselessStrokeStyle): + del styleMap[uselessStrokeStyle] + num += 1 - # remove font properties for non-text elements - # I've actually observed this in real SVG content - if not mayContainTextNodes(node): - for fontstyle in [ 'font-family', 'font-size', 'font-stretch', 'font-size-adjust', - 'font-style', 'font-variant', 'font-weight', - 'letter-spacing', 'line-height', 'kerning', - 'text-align', 'text-anchor', 'text-decoration', - 'text-rendering', 'unicode-bidi', - 'word-spacing', 'writing-mode'] : - if styleMap.has_key(fontstyle) : - del styleMap[fontstyle] - num += 1 + # remove font properties for non-text elements + # I've actually observed this in real SVG content + if not mayContainTextNodes(node): + for fontstyle in ['font-family', 'font-size', 'font-stretch', 'font-size-adjust', + 'font-style', 'font-variant', 'font-weight', + 'letter-spacing', 'line-height', 'kerning', + 'text-align', 'text-anchor', 'text-decoration', + 'text-rendering', 'unicode-bidi', + 'word-spacing', 'writing-mode']: + if fontstyle in styleMap: + del styleMap[fontstyle] + num += 1 - # remove inkscape-specific styles - # TODO: need to get a full list of these - for inkscapeStyle in ['-inkscape-font-specification']: - if styleMap.has_key(inkscapeStyle): - del styleMap[inkscapeStyle] - num += 1 + # remove inkscape-specific styles + # TODO: need to get a full list of these + for inkscapeStyle in ['-inkscape-font-specification']: + if inkscapeStyle in styleMap: + del styleMap[inkscapeStyle] + num += 1 - if styleMap.has_key('overflow') : - # overflow specified on element other than svg, marker, pattern - if not node.nodeName in ['svg','marker','pattern']: - del styleMap['overflow'] - num += 1 - # it is a marker, pattern or svg - # as long as this node is not the document <svg>, then only - # remove overflow='hidden'. 
See - # http://www.w3.org/TR/2010/WD-SVG11-20100622/masking.html#OverflowProperty - elif node != node.ownerDocument.documentElement: - if styleMap['overflow'] == 'hidden': - del styleMap['overflow'] - num += 1 - # else if outer svg has a overflow="visible", we can remove it - elif styleMap['overflow'] == 'visible': - del styleMap['overflow'] - num += 1 + if 'overflow' in styleMap: + # remove overflow from elements to which it does not apply, + # see https://www.w3.org/TR/SVG/masking.html#OverflowProperty + if node.nodeName not in ['svg', 'symbol', 'image', 'foreignObject', 'marker', 'pattern']: + del styleMap['overflow'] + num += 1 + # if the node is not the root <svg> element the SVG's user agent style sheet + # overrides the initial (i.e. default) value with the value 'hidden', which can consequently be removed + # (see last bullet point in the link above) + elif node != node.ownerDocument.documentElement: + if styleMap['overflow'] == 'hidden': + del styleMap['overflow'] + num += 1 + # on the root <svg> element the CSS2 default overflow="visible" is the initial value and we can remove it + elif styleMap['overflow'] == 'visible': + del styleMap['overflow'] + num += 1 - # now if any of the properties match known SVG attributes we prefer attributes - # over style so emit them and remove them from the style map - if options.style_to_xml: - for propName in styleMap.keys() : - if propName in svgAttributes : - node.setAttribute(propName, styleMap[propName]) - del styleMap[propName] + # now if any of the properties match known SVG attributes we prefer attributes + # over style so emit them and remove them from the style map + if options.style_to_xml: + for propName in list(styleMap): + if propName in svgAttributes: + node.setAttribute(propName, styleMap[propName]) + del styleMap[propName] - _setStyle(node, styleMap) + _setStyle(node, styleMap) - # recurse for our child elements - for child in node.childNodes : - num += repairStyle(child,options) + # recurse for our child elements + for child in node.childNodes: + num += repairStyle(child, options) + + return num + + +def styleInheritedFromParent(node, style): + """ + Returns the value of 'style' that is inherited from the parents of the passed-in node + + Warning: This method only considers presentation attributes and inline styles, + any style sheets are ignored! + """ + parentNode = node.parentNode + + # return None if we reached the Document element + if parentNode.nodeType == Node.DOCUMENT_NODE: + return None + + # check styles first (they take precedence over presentation attributes) + styles = _getStyle(parentNode) + if style in styles: + value = styles[style] + if not value == 'inherit': + return value + + # check attributes + value = parentNode.getAttribute(style) + if value not in ['', 'inherit']: + return parentNode.getAttribute(style) + + # check the next parent recursively if we did not find a value yet + return styleInheritedFromParent(parentNode, style) + + +def styleInheritedByChild(node, style, nodeIsChild=False): + """ + Returns whether 'style' is inherited by any children of the passed-in node + + If False is returned, it is guaranteed that 'style' can safely be removed + from the passed-in node without influencing visual output of it's children + + If True is returned, the passed-in node should not have its text-based + attributes removed. + + Warning: This method only considers presentation attributes and inline styles, + any style sheets are ignored! 
+ """ + # Comment, text and CDATA nodes don't have attributes and aren't containers so they can't inherit attributes + if node.nodeType != Node.ELEMENT_NODE: + return False + + if nodeIsChild: + # if the current child node sets a new value for 'style' + # we can stop the search in the current branch of the DOM tree + + # check attributes + if node.getAttribute(style) not in ['', 'inherit']: + return False + # check styles + styles = _getStyle(node) + if (style in styles) and not (styles[style] == 'inherit'): + return False + else: + # if the passed-in node does not have any children 'style' can obviously not be inherited + if not node.childNodes: + return False + + # If we have child nodes recursively check those + if node.childNodes: + for child in node.childNodes: + if styleInheritedByChild(child, style, True): + return True + + # If the current element is a container element the inherited style is meaningless + # (since we made sure it's not inherited by any of its children) + if node.nodeName in ['a', 'defs', 'glyph', 'g', 'marker', 'mask', + 'missing-glyph', 'pattern', 'svg', 'switch', 'symbol']: + return False + + # in all other cases we have to assume the inherited value of 'style' is meaningful and has to be kept + # (e.g nodes without children at the end of the DOM tree, text nodes, ...) + return True - return num def mayContainTextNodes(node): - """ - Returns True if the passed-in node is probably a text element, or at least - one of its descendants is probably a text element. + """ + Returns True if the passed-in node is probably a text element, or at least + one of its descendants is probably a text element. - If False is returned, it is guaranteed that the passed-in node has no - business having text-based attributes. + If False is returned, it is guaranteed that the passed-in node has no + business having text-based attributes. - If True is returned, the passed-in node should not have its text-based - attributes removed. - """ - # Cached result of a prior call? - try: - return node.mayContainTextNodes - except AttributeError: - pass + If True is returned, the passed-in node should not have its text-based + attributes removed. + """ + # Cached result of a prior call? + try: + return node.mayContainTextNodes + except AttributeError: + pass - result = True # Default value - # Comment, text and CDATA nodes don't have attributes and aren't containers - if node.nodeType != 1: - result = False - # Non-SVG elements? Unknown elements! - elif node.namespaceURI != NS['SVG']: - result = True - # Blacklisted elements. Those are guaranteed not to be text elements. - elif node.nodeName in ['rect', 'circle', 'ellipse', 'line', 'polygon', - 'polyline', 'path', 'image', 'stop']: - result = False - # Group elements. If we're missing any here, the default of True is used. - elif node.nodeName in ['g', 'clipPath', 'marker', 'mask', 'pattern', - 'linearGradient', 'radialGradient', 'symbol']: - result = False - for child in node.childNodes: - if mayContainTextNodes(child): - result = True - # Everything else should be considered a future SVG-version text element - # at best, or an unknown element at worst. result will stay True. + result = True # Default value + # Comment, text and CDATA nodes don't have attributes and aren't containers + if node.nodeType != Node.ELEMENT_NODE: + result = False + # Non-SVG elements? Unknown elements! + elif node.namespaceURI != NS['SVG']: + result = True + # Blacklisted elements. Those are guaranteed not to be text elements. 
+ elif node.nodeName in ['rect', 'circle', 'ellipse', 'line', 'polygon', + 'polyline', 'path', 'image', 'stop']: + result = False + # Group elements. If we're missing any here, the default of True is used. + elif node.nodeName in ['g', 'clipPath', 'marker', 'mask', 'pattern', + 'linearGradient', 'radialGradient', 'symbol']: + result = False + for child in node.childNodes: + if mayContainTextNodes(child): + result = True + # Everything else should be considered a future SVG-version text element + # at best, or an unknown element at worst. result will stay True. + + # Cache this result before returning it. + node.mayContainTextNodes = result + return result + + +# A list of default attributes that are safe to remove if all conditions are fulfilled +# +# Each default attribute is an object of type 'DefaultAttribute' with the following fields: +# name - name of the attribute to be matched +# value - default value of the attribute +# units - the unit(s) for which 'value' is valid (see 'Unit' class for possible specifications) +# elements - name(s) of SVG element(s) for which the attribute specification is valid +# conditions - additional conditions that have to be fulfilled for removal of the specified default attribute +# implemented as lambda functions with one argument (an xml.dom.minidom node) +# evaluating to either True or False +# When not specifying a field value, it will be ignored (i.e. always matches) +# +# Sources for this list: +# https://www.w3.org/TR/SVG/attindex.html (mostly implemented) +# https://www.w3.org/TR/SVGTiny12/attributeTable.html (not yet implemented) +# https://www.w3.org/TR/SVG2/attindex.html (not yet implemented) +# +DefaultAttribute = namedtuple('DefaultAttribute', ['name', 'value', 'units', 'elements', 'conditions']) +DefaultAttribute.__new__.__defaults__ = (None,) * len(DefaultAttribute._fields) +default_attributes = [ + # unit systems + DefaultAttribute('clipPathUnits', 'userSpaceOnUse', elements=['clipPath']), + DefaultAttribute('filterUnits', 'objectBoundingBox', elements=['filter']), + DefaultAttribute('gradientUnits', 'objectBoundingBox', elements=['linearGradient', 'radialGradient']), + DefaultAttribute('maskUnits', 'objectBoundingBox', elements=['mask']), + DefaultAttribute('maskContentUnits', 'userSpaceOnUse', elements=['mask']), + DefaultAttribute('patternUnits', 'objectBoundingBox', elements=['pattern']), + DefaultAttribute('patternContentUnits', 'userSpaceOnUse', elements=['pattern']), + DefaultAttribute('primitiveUnits', 'userSpaceOnUse', elements=['filter']), + + DefaultAttribute('externalResourcesRequired', 'false', + elements=['a', 'altGlyph', 'animate', 'animateColor', + 'animateMotion', 'animateTransform', 'circle', 'clipPath', 'cursor', 'defs', 'ellipse', + 'feImage', 'filter', 'font', 'foreignObject', 'g', 'image', 'line', 'linearGradient', + 'marker', 'mask', 'mpath', 'path', 'pattern', 'polygon', 'polyline', 'radialGradient', + 'rect', 'script', 'set', 'svg', 'switch', 'symbol', 'text', 'textPath', 'tref', 'tspan', + 'use', 'view']), + + # svg elements + DefaultAttribute('width', 100, Unit.PCT, elements=['svg']), + DefaultAttribute('height', 100, Unit.PCT, elements=['svg']), + DefaultAttribute('baseProfile', 'none', elements=['svg']), + DefaultAttribute('preserveAspectRatio', 'xMidYMid meet', + elements=['feImage', 'image', 'marker', 'pattern', 'svg', 'symbol', 'view']), + + # common attributes / basic types + DefaultAttribute('x', 0, elements=['cursor', 'fePointLight', 'feSpotLight', 'foreignObject', + 'image', 'pattern', 'rect', 'svg', 
'text', 'use']), + DefaultAttribute('y', 0, elements=['cursor', 'fePointLight', 'feSpotLight', 'foreignObject', + 'image', 'pattern', 'rect', 'svg', 'text', 'use']), + DefaultAttribute('z', 0, elements=['fePointLight', 'feSpotLight']), + DefaultAttribute('x1', 0, elements=['line']), + DefaultAttribute('y1', 0, elements=['line']), + DefaultAttribute('x2', 0, elements=['line']), + DefaultAttribute('y2', 0, elements=['line']), + DefaultAttribute('cx', 0, elements=['circle', 'ellipse']), + DefaultAttribute('cy', 0, elements=['circle', 'ellipse']), + + # markers + DefaultAttribute('markerUnits', 'strokeWidth', elements=['marker']), + DefaultAttribute('refX', 0, elements=['marker']), + DefaultAttribute('refY', 0, elements=['marker']), + DefaultAttribute('markerHeight', 3, elements=['marker']), + DefaultAttribute('markerWidth', 3, elements=['marker']), + DefaultAttribute('orient', 0, elements=['marker']), + + # text / textPath / tspan / tref + DefaultAttribute('lengthAdjust', 'spacing', elements=['text', 'textPath', 'tref', 'tspan']), + DefaultAttribute('startOffset', 0, elements=['textPath']), + DefaultAttribute('method', 'align', elements=['textPath']), + DefaultAttribute('spacing', 'exact', elements=['textPath']), + + # filters and masks + DefaultAttribute('x', -10, Unit.PCT, ['filter', 'mask']), + DefaultAttribute('x', -0.1, Unit.NONE, ['filter', 'mask'], + conditions=lambda node: node.getAttribute('gradientUnits') != 'userSpaceOnUse'), + DefaultAttribute('y', -10, Unit.PCT, ['filter', 'mask']), + DefaultAttribute('y', -0.1, Unit.NONE, ['filter', 'mask'], + conditions=lambda node: node.getAttribute('gradientUnits') != 'userSpaceOnUse'), + DefaultAttribute('width', 120, Unit.PCT, ['filter', 'mask']), + DefaultAttribute('width', 1.2, Unit.NONE, ['filter', 'mask'], + conditions=lambda node: node.getAttribute('gradientUnits') != 'userSpaceOnUse'), + DefaultAttribute('height', 120, Unit.PCT, ['filter', 'mask']), + DefaultAttribute('height', 1.2, Unit.NONE, ['filter', 'mask'], + conditions=lambda node: node.getAttribute('gradientUnits') != 'userSpaceOnUse'), + + # gradients + DefaultAttribute('x1', 0, elements=['linearGradient']), + DefaultAttribute('y1', 0, elements=['linearGradient']), + DefaultAttribute('y2', 0, elements=['linearGradient']), + DefaultAttribute('x2', 100, Unit.PCT, elements=['linearGradient']), + DefaultAttribute('x2', 1, Unit.NONE, elements=['linearGradient'], + conditions=lambda node: node.getAttribute('gradientUnits') != 'userSpaceOnUse'), + # remove fx/fy before cx/cy to catch the case where fx = cx = 50% or fy = cy = 50% respectively + DefaultAttribute('fx', elements=['radialGradient'], + conditions=lambda node: node.getAttribute('fx') == node.getAttribute('cx')), + DefaultAttribute('fy', elements=['radialGradient'], + conditions=lambda node: node.getAttribute('fy') == node.getAttribute('cy')), + DefaultAttribute('r', 50, Unit.PCT, elements=['radialGradient']), + DefaultAttribute('r', 0.5, Unit.NONE, elements=['radialGradient'], + conditions=lambda node: node.getAttribute('gradientUnits') != 'userSpaceOnUse'), + DefaultAttribute('cx', 50, Unit.PCT, elements=['radialGradient']), + DefaultAttribute('cx', 0.5, Unit.NONE, elements=['radialGradient'], + conditions=lambda node: node.getAttribute('gradientUnits') != 'userSpaceOnUse'), + DefaultAttribute('cy', 50, Unit.PCT, elements=['radialGradient']), + DefaultAttribute('cy', 0.5, Unit.NONE, elements=['radialGradient'], + conditions=lambda node: node.getAttribute('gradientUnits') != 'userSpaceOnUse'), + 
DefaultAttribute('spreadMethod', 'pad', elements=['linearGradient', 'radialGradient']), + + # filter effects + # TODO: Some numerical attributes allow an optional second value ("number-optional-number") + # and are currently handled as strings to avoid an exception in 'SVGLength', see + # https://github.com/scour-project/scour/pull/192 + DefaultAttribute('amplitude', 1, elements=['feFuncA', 'feFuncB', 'feFuncG', 'feFuncR']), + DefaultAttribute('azimuth', 0, elements=['feDistantLight']), + DefaultAttribute('baseFrequency', '0', elements=['feFuncA', 'feFuncB', 'feFuncG', 'feFuncR']), + DefaultAttribute('bias', 1, elements=['feConvolveMatrix']), + DefaultAttribute('diffuseConstant', 1, elements=['feDiffuseLighting']), + DefaultAttribute('edgeMode', 'duplicate', elements=['feConvolveMatrix']), + DefaultAttribute('elevation', 0, elements=['feDistantLight']), + DefaultAttribute('exponent', 1, elements=['feFuncA', 'feFuncB', 'feFuncG', 'feFuncR']), + DefaultAttribute('intercept', 0, elements=['feFuncA', 'feFuncB', 'feFuncG', 'feFuncR']), + DefaultAttribute('k1', 0, elements=['feComposite']), + DefaultAttribute('k2', 0, elements=['feComposite']), + DefaultAttribute('k3', 0, elements=['feComposite']), + DefaultAttribute('k4', 0, elements=['feComposite']), + DefaultAttribute('mode', 'normal', elements=['feBlend']), + DefaultAttribute('numOctaves', 1, elements=['feTurbulence']), + DefaultAttribute('offset', 0, elements=['feFuncA', 'feFuncB', 'feFuncG', 'feFuncR']), + DefaultAttribute('operator', 'over', elements=['feComposite']), + DefaultAttribute('operator', 'erode', elements=['feMorphology']), + DefaultAttribute('order', '3', elements=['feConvolveMatrix']), + DefaultAttribute('pointsAtX', 0, elements=['feSpotLight']), + DefaultAttribute('pointsAtY', 0, elements=['feSpotLight']), + DefaultAttribute('pointsAtZ', 0, elements=['feSpotLight']), + DefaultAttribute('preserveAlpha', 'false', elements=['feConvolveMatrix']), + DefaultAttribute('radius', '0', elements=['feMorphology']), + DefaultAttribute('scale', 0, elements=['feDisplacementMap']), + DefaultAttribute('seed', 0, elements=['feTurbulence']), + DefaultAttribute('specularConstant', 1, elements=['feSpecularLighting']), + DefaultAttribute('specularExponent', 1, elements=['feSpecularLighting', 'feSpotLight']), + DefaultAttribute('stdDeviation', '0', elements=['feGaussianBlur']), + DefaultAttribute('stitchTiles', 'noStitch', elements=['feTurbulence']), + DefaultAttribute('surfaceScale', 1, elements=['feDiffuseLighting', 'feSpecularLighting']), + DefaultAttribute('type', 'matrix', elements=['feColorMatrix']), + DefaultAttribute('type', 'turbulence', elements=['feTurbulence']), + DefaultAttribute('xChannelSelector', 'A', elements=['feDisplacementMap']), + DefaultAttribute('yChannelSelector', 'A', elements=['feDisplacementMap']) +] + +# split to increase lookup performance +# TODO: 'default_attributes_universal' is actually empty right now - will we ever need it? +default_attributes_universal = [] # list containing attributes valid for all elements +default_attributes_per_element = defaultdict(list) # dict containing lists of attributes valid for individual elements +for default_attribute in default_attributes: + if default_attribute.elements is None: + default_attributes_universal.append(default_attribute) + else: + for element in default_attribute.elements: + default_attributes_per_element[element].append(default_attribute) - # Cache this result before returning it. 
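The split into `default_attributes_universal` and `default_attributes_per_element` above is purely a lookup optimisation: for any given node only the defaults registered for its element name need to be checked. A minimal, self-contained sketch of that pattern (illustrative only, with made-up entries and helper names, not Scour's actual API):

```python
# Hedged sketch: group default-attribute records per element name so a node
# only has to be checked against the defaults that can actually apply to it.
from collections import defaultdict, namedtuple

Default = namedtuple('Default', ['name', 'value', 'elements'])

defaults = [
    Default('clipPathUnits', 'userSpaceOnUse', ['clipPath']),
    Default('spreadMethod', 'pad', ['linearGradient', 'radialGradient']),
]

per_element = defaultdict(list)
for d in defaults:
    for element in d.elements:
        per_element[element].append(d)

def removable(element_name, attribute_name, attribute_value):
    """Return True if the attribute carries its default value for this element."""
    return any(d.name == attribute_name and str(d.value) == attribute_value
               for d in per_element.get(element_name, []))

print(removable('linearGradient', 'spreadMethod', 'pad'))      # True
print(removable('linearGradient', 'spreadMethod', 'reflect'))  # False
```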
- node.mayContainTextNodes = result - return result def taint(taintedSet, taintedAttribute): - u"""Adds an attribute to a set of attributes. + u"""Adds an attribute to a set of attributes. - Related attributes are also included.""" - taintedSet.add(taintedAttribute) - if taintedAttribute == 'marker': - taintedSet |= set(['marker-start', 'marker-mid', 'marker-end']) - if taintedAttribute in ['marker-start', 'marker-mid', 'marker-end']: - taintedSet.add('marker') - return taintedSet + Related attributes are also included.""" + taintedSet.add(taintedAttribute) + if taintedAttribute == 'marker': + taintedSet |= set(['marker-start', 'marker-mid', 'marker-end']) + if taintedAttribute in ['marker-start', 'marker-mid', 'marker-end']: + taintedSet.add('marker') + return taintedSet -def removeDefaultAttributeValues(node, options, tainted=set()): - u"""'tainted' keeps a set of attributes defined in parent nodes. - For such attributes, we don't delete attributes with default values.""" - num = 0 - if node.nodeType != 1: return 0 +def removeDefaultAttributeValue(node, attribute): + """ + Removes the DefaultAttribute 'attribute' from 'node' if specified conditions are fulfilled - # gradientUnits: objectBoundingBox - if node.getAttribute('gradientUnits') == 'objectBoundingBox': - node.removeAttribute('gradientUnits') - num += 1 + Warning: Does NOT check if the attribute is actually valid for the passed element type for increased performance! + """ + if not node.hasAttribute(attribute.name): + return 0 - # spreadMethod: pad - if node.getAttribute('spreadMethod') == 'pad': - node.removeAttribute('spreadMethod') - num += 1 + # differentiate between text and numeric values + if isinstance(attribute.value, str): + if node.getAttribute(attribute.name) == attribute.value: + if (attribute.conditions is None) or attribute.conditions(node): + node.removeAttribute(attribute.name) + return 1 + else: + nodeValue = SVGLength(node.getAttribute(attribute.name)) + if ((attribute.value is None) + or ((nodeValue.value == attribute.value) and not (nodeValue.units == Unit.INVALID))): + if ((attribute.units is None) + or (nodeValue.units == attribute.units) + or (isinstance(attribute.units, list) and nodeValue.units in attribute.units)): + if (attribute.conditions is None) or attribute.conditions(node): + node.removeAttribute(attribute.name) + return 1 - # x1: 0% - if node.getAttribute('x1') != '': - x1 = SVGLength(node.getAttribute('x1')) - if x1.value == 0: - node.removeAttribute('x1') - num += 1 + return 0 - # y1: 0% - if node.getAttribute('y1') != '': - y1 = SVGLength(node.getAttribute('y1')) - if y1.value == 0: - node.removeAttribute('y1') - num += 1 - # x2: 100% - if node.getAttribute('x2') != '': - x2 = SVGLength(node.getAttribute('x2')) - if (x2.value == 100 and x2.units == Unit.PCT) or (x2.value == 1 and x2.units == Unit.NONE): - node.removeAttribute('x2') - num += 1 +def removeDefaultAttributeValues(node, options, tainted=None): + u"""'tainted' keeps a set of attributes defined in parent nodes. 
- # y2: 0% - if node.getAttribute('y2') != '': - y2 = SVGLength(node.getAttribute('y2')) - if y2.value == 0: - node.removeAttribute('y2') - num += 1 + For such attributes, we don't delete attributes with default values.""" + num = 0 + if node.nodeType != Node.ELEMENT_NODE: + return 0 - # fx: equal to rx - if node.getAttribute('fx') != '': - if node.getAttribute('fx') == node.getAttribute('cx'): - node.removeAttribute('fx') - num += 1 + if tainted is None: + tainted = set() - # fy: equal to ry - if node.getAttribute('fy') != '': - if node.getAttribute('fy') == node.getAttribute('cy'): - node.removeAttribute('fy') - num += 1 + # Conditionally remove all default attributes defined in 'default_attributes' (a list of 'DefaultAttribute's) + # + # For increased performance do not iterate the whole list for each element but run only on valid subsets + # - 'default_attributes_universal' (attributes valid for all elements) + # - 'default_attributes_per_element' (attributes specific to one specific element type) + for attribute in default_attributes_universal: + num += removeDefaultAttributeValue(node, attribute) + if node.nodeName in default_attributes_per_element: + for attribute in default_attributes_per_element[node.nodeName]: + num += removeDefaultAttributeValue(node, attribute) - # cx: 50% - if node.getAttribute('cx') != '': - cx = SVGLength(node.getAttribute('cx')) - if (cx.value == 50 and cx.units == Unit.PCT) or (cx.value == 0.5 and cx.units == Unit.NONE): - node.removeAttribute('cx') - num += 1 + # Summarily get rid of default properties + attributes = [node.attributes.item(i).nodeName for i in range(node.attributes.length)] + for attribute in attributes: + if attribute not in tainted: + if attribute in default_properties: + if node.getAttribute(attribute) == default_properties[attribute]: + node.removeAttribute(attribute) + num += 1 + else: + tainted = taint(tainted, attribute) + # Properties might also occur as styles, remove them too + styles = _getStyle(node) + for attribute in list(styles): + if attribute not in tainted: + if attribute in default_properties: + if styles[attribute] == default_properties[attribute]: + del styles[attribute] + num += 1 + else: + tainted = taint(tainted, attribute) + _setStyle(node, styles) - # cy: 50% - if node.getAttribute('cy') != '': - cy = SVGLength(node.getAttribute('cy')) - if (cy.value == 50 and cy.units == Unit.PCT) or (cy.value == 0.5 and cy.units == Unit.NONE): - node.removeAttribute('cy') - num += 1 + # recurse for our child elements + for child in node.childNodes: + num += removeDefaultAttributeValues(child, options, tainted.copy()) - # r: 50% - if node.getAttribute('r') != '': - r = SVGLength(node.getAttribute('r')) - if (r.value == 50 and r.units == Unit.PCT) or (r.value == 0.5 and r.units == Unit.NONE): - node.removeAttribute('r') - num += 1 + return num - # Summarily get rid of some more attributes - attributes = [node.attributes.item(i).nodeName - for i in range(node.attributes.length)] - for attribute in attributes: - if attribute not in tainted: - if attribute in default_attributes.keys(): - if node.getAttribute(attribute) == default_attributes[attribute]: - node.removeAttribute(attribute) - num += 1 - else: - tainted = taint(tainted, attribute) - # These attributes might also occur as styles - styles = _getStyle(node) - for attribute in styles.keys(): - if attribute not in tainted: - if attribute in default_attributes.keys(): - if styles[attribute] == default_attributes[attribute]: - del styles[attribute] - num += 1 - else: - tainted = 
taint(tainted, attribute) - _setStyle(node, styles) - - # recurse for our child elements - for child in node.childNodes : - num += removeDefaultAttributeValues(child, options, tainted.copy()) - - return num rgb = re.compile(r"\s*rgb\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)\s*") rgbp = re.compile(r"\s*rgb\(\s*(\d*\.?\d+)%\s*,\s*(\d*\.?\d+)%\s*,\s*(\d*\.?\d+)%\s*\)\s*") + + def convertColor(value): - """ - Converts the input color string and returns a #RRGGBB (or #RGB if possible) string - """ - s = value + """ + Converts the input color string and returns a #RRGGBB (or #RGB if possible) string + """ + s = value - if s in colors.keys(): - s = colors[s] + if s in colors: + s = colors[s] - rgbpMatch = rgbp.match(s) - if rgbpMatch != None : - r = int(float(rgbpMatch.group(1)) * 255.0 / 100.0) - g = int(float(rgbpMatch.group(2)) * 255.0 / 100.0) - b = int(float(rgbpMatch.group(3)) * 255.0 / 100.0) - s = '#%02x%02x%02x' % (r, g, b) - else: - rgbMatch = rgb.match(s) - if rgbMatch != None : - r = int( rgbMatch.group(1) ) - g = int( rgbMatch.group(2) ) - b = int( rgbMatch.group(3) ) - s = '#%02x%02x%02x' % (r, g, b) + rgbpMatch = rgbp.match(s) + if rgbpMatch is not None: + r = int(float(rgbpMatch.group(1)) * 255.0 / 100.0) + g = int(float(rgbpMatch.group(2)) * 255.0 / 100.0) + b = int(float(rgbpMatch.group(3)) * 255.0 / 100.0) + s = '#%02x%02x%02x' % (r, g, b) + else: + rgbMatch = rgb.match(s) + if rgbMatch is not None: + r = int(rgbMatch.group(1)) + g = int(rgbMatch.group(2)) + b = int(rgbMatch.group(3)) + s = '#%02x%02x%02x' % (r, g, b) - if s[0] == '#': - s = s.lower() - if len(s)==7 and s[1]==s[2] and s[3]==s[4] and s[5]==s[6]: - s = '#'+s[1]+s[3]+s[5] + if s[0] == '#': + s = s.lower() + if len(s) == 7 and s[1] == s[2] and s[3] == s[4] and s[5] == s[6]: + s = '#' + s[1] + s[3] + s[5] - return s + return s -def convertColors(element) : - """ - Recursively converts all color properties into #RRGGBB format if shorter - """ - numBytes = 0 - if element.nodeType != 1: return 0 +def convertColors(element): + """ + Recursively converts all color properties into #RRGGBB format if shorter + """ + numBytes = 0 - # set up list of color attributes for each element type - attrsToConvert = [] - if element.nodeName in ['rect', 'circle', 'ellipse', 'polygon', \ - 'line', 'polyline', 'path', 'g', 'a']: - attrsToConvert = ['fill', 'stroke'] - elif element.nodeName in ['stop']: - attrsToConvert = ['stop-color'] - elif element.nodeName in ['solidColor']: - attrsToConvert = ['solid-color'] + if element.nodeType != Node.ELEMENT_NODE: + return 0 - # now convert all the color formats - styles = _getStyle(element) - for attr in attrsToConvert: - oldColorValue = element.getAttribute(attr) - if oldColorValue != '': - newColorValue = convertColor(oldColorValue) - oldBytes = len(oldColorValue) - newBytes = len(newColorValue) - if oldBytes > newBytes: - element.setAttribute(attr, newColorValue) - numBytes += (oldBytes - len(element.getAttribute(attr))) - # colors might also hide in styles - if attr in styles.keys(): - oldColorValue = styles[attr] - newColorValue = convertColor(oldColorValue) - oldBytes = len(oldColorValue) - newBytes = len(newColorValue) - if oldBytes > newBytes: - styles[attr] = newColorValue - numBytes += (oldBytes - len(element.getAttribute(attr))) - _setStyle(element, styles) + # set up list of color attributes for each element type + attrsToConvert = [] + if element.nodeName in ['rect', 'circle', 'ellipse', 'polygon', + 'line', 'polyline', 'path', 'g', 'a']: + attrsToConvert = ['fill', 'stroke'] + elif 
element.nodeName in ['stop']: + attrsToConvert = ['stop-color'] + elif element.nodeName in ['solidColor']: + attrsToConvert = ['solid-color'] - # now recurse for our child elements - for child in element.childNodes : - numBytes += convertColors(child) + # now convert all the color formats + styles = _getStyle(element) + for attr in attrsToConvert: + oldColorValue = element.getAttribute(attr) + if oldColorValue != '': + newColorValue = convertColor(oldColorValue) + oldBytes = len(oldColorValue) + newBytes = len(newColorValue) + if oldBytes > newBytes: + element.setAttribute(attr, newColorValue) + numBytes += (oldBytes - len(element.getAttribute(attr))) + # colors might also hide in styles + if attr in styles: + oldColorValue = styles[attr] + newColorValue = convertColor(oldColorValue) + oldBytes = len(oldColorValue) + newBytes = len(newColorValue) + if oldBytes > newBytes: + styles[attr] = newColorValue + numBytes += (oldBytes - newBytes) + _setStyle(element, styles) - return numBytes + # now recurse for our child elements + for child in element.childNodes: + numBytes += convertColors(child) + + return numBytes # TODO: go over what this method does and see if there is a way to optimize it # TODO: go over the performance of this method and see if I can save memory/speed by # reusing data structures, etc -def cleanPath(element, options) : - """ - Cleans the path string (d attribute) of the element - """ - global numBytesSavedInPathData - global numPathSegmentsReduced - global numCurvesStraightened - # this gets the parser object from svg_regex.py - oldPathStr = element.getAttribute('d') - path = svg_parser.parse(oldPathStr) - # This determines whether the stroke has round linecaps. If it does, - # we do not want to collapse empty segments, as they are actually rendered. - withRoundLineCaps = element.getAttribute('stroke-linecap') == 'round' +def clean_path(element, options, stats): + """ + Cleans the path string (d attribute) of the element + """ - # The first command must be a moveto, and whether it's relative (m) - # or absolute (M), the first set of coordinates *is* absolute. So - # the first iteration of the loop below will get x,y and startx,starty. + # this gets the parser object from svg_regex.py + oldPathStr = element.getAttribute('d') + path = svg_parser.parse(oldPathStr) + style = _getStyle(element) - # convert absolute coordinates into relative ones. - # Reuse the data structure 'path', since we're not adding or removing subcommands. - # Also reuse the coordinate lists since we're not adding or removing any. 
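For reference, the colour normalisation performed by `convertColor()` above boils down to matching `rgb()` notation, emitting `#rrggbb`, and collapsing that to `#rgb` when each channel repeats its nibble. A small self-contained sketch of just that reduced case (the real function additionally handles named colours and percentage values):

```python
# Tiny illustration of the rgb()/hex reduction performed by convertColor().
import re

rgb = re.compile(r"\s*rgb\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)\s*")

def normalize(value):
    m = rgb.match(value)
    if m is not None:
        value = '#%02x%02x%02x' % tuple(int(g) for g in m.groups())
    if value.startswith('#'):
        value = value.lower()
        # collapse #rrggbb to #rgb when every channel repeats its nibble
        if len(value) == 7 and value[1] == value[2] and value[3] == value[4] and value[5] == value[6]:
            value = '#' + value[1] + value[3] + value[5]
    return value

assert normalize('rgb(255, 0, 0)') == '#f00'
assert normalize('#AABBCC') == '#abc'
assert normalize('#123456') == '#123456'
```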
- for pathIndex in xrange(0, len(path)): - cmd, data = path[pathIndex] # Changes to cmd don't get through to the data structure - i = 0 - # adjust abs to rel - # only the A command has some values that we don't want to adjust (radii, rotation, flags) - if cmd == 'A': - for i in xrange(i, len(data), 7): - data[i+5] -= x - data[i+6] -= y - x += data[i+5] - y += data[i+6] - path[pathIndex] = ('a', data) - elif cmd == 'a': - x += sum(data[5::7]) - y += sum(data[6::7]) - elif cmd == 'H': - for i in xrange(i, len(data)): - data[i] -= x - x += data[i] - path[pathIndex] = ('h', data) - elif cmd == 'h': - x += sum(data) - elif cmd == 'V': - for i in xrange(i, len(data)): - data[i] -= y - y += data[i] - path[pathIndex] = ('v', data) - elif cmd == 'v': - y += sum(data) - elif cmd == 'M': - startx, starty = data[0], data[1] - # If this is a path starter, don't convert its first - # coordinate to relative; that would just make it (0, 0) - if pathIndex != 0: - data[0] -= x - data[1] -= y + # This determines whether the stroke has round or square linecaps. If it does, we do not want to collapse empty + # segments, as they are actually rendered (as circles or squares with diameter/dimension matching the path-width). + has_round_or_square_linecaps = ( + element.getAttribute('stroke-linecap') in ['round', 'square'] + or 'stroke-linecap' in style and style['stroke-linecap'] in ['round', 'square'] + ) - x, y = startx, starty - i = 2 - for i in xrange(i, len(data), 2): - data[i] -= x - data[i+1] -= y - x += data[i] - y += data[i+1] - path[pathIndex] = ('m', data) - elif cmd in ['L','T']: - for i in xrange(i, len(data), 2): - data[i] -= x - data[i+1] -= y - x += data[i] - y += data[i+1] - path[pathIndex] = (cmd.lower(), data) - elif cmd in ['m']: - if pathIndex == 0: - # START OF PATH - this is an absolute moveto - # followed by relative linetos + # This determines whether the stroke has intermediate markers. If it does, we do not want to collapse + # straight segments running in the same direction, as markers are rendered on the intermediate nodes. + has_intermediate_markers = ( + element.hasAttribute('marker') + or element.hasAttribute('marker-mid') + or 'marker' in style + or 'marker-mid' in style + ) + + # The first command must be a moveto, and whether it's relative (m) + # or absolute (M), the first set of coordinates *is* absolute. So + # the first iteration of the loop below will get x,y and startx,starty. + + # convert absolute coordinates into relative ones. + # Reuse the data structure 'path', since we're not adding or removing subcommands. + # Also reuse the coordinate lists since we're not adding or removing any. 
+ x = y = 0 + for pathIndex in range(len(path)): + cmd, data = path[pathIndex] # Changes to cmd don't get through to the data structure + i = 0 + # adjust abs to rel + # only the A command has some values that we don't want to adjust (radii, rotation, flags) + if cmd == 'A': + for i in range(i, len(data), 7): + data[i + 5] -= x + data[i + 6] -= y + x += data[i + 5] + y += data[i + 6] + path[pathIndex] = ('a', data) + elif cmd == 'a': + x += sum(data[5::7]) + y += sum(data[6::7]) + elif cmd == 'H': + for i in range(i, len(data)): + data[i] -= x + x += data[i] + path[pathIndex] = ('h', data) + elif cmd == 'h': + x += sum(data) + elif cmd == 'V': + for i in range(i, len(data)): + data[i] -= y + y += data[i] + path[pathIndex] = ('v', data) + elif cmd == 'v': + y += sum(data) + elif cmd == 'M': startx, starty = data[0], data[1] + # If this is a path starter, don't convert its first + # coordinate to relative; that would just make it (0, 0) + if pathIndex != 0: + data[0] -= x + data[1] -= y + x, y = startx, starty i = 2 - else: - startx = x + data[0] - starty = y + data[1] - for i in xrange(i, len(data), 2): - x += data[i] - y += data[i+1] - elif cmd in ['l','t']: - x += sum(data[0::2]) - y += sum(data[1::2]) - elif cmd in ['S','Q']: - for i in xrange(i, len(data), 4): - data[i] -= x - data[i+1] -= y - data[i+2] -= x - data[i+3] -= y - x += data[i+2] - y += data[i+3] - path[pathIndex] = (cmd.lower(), data) - elif cmd in ['s','q']: - x += sum(data[2::4]) - y += sum(data[3::4]) - elif cmd == 'C': - for i in xrange(i, len(data), 6): - data[i] -= x - data[i+1] -= y - data[i+2] -= x - data[i+3] -= y - data[i+4] -= x - data[i+5] -= y - x += data[i+4] - y += data[i+5] - path[pathIndex] = ('c', data) - elif cmd == 'c': - x += sum(data[4::6]) - y += sum(data[5::6]) - elif cmd in ['z','Z']: - x, y = startx, starty - path[pathIndex] = ('z', data) + for i in range(i, len(data), 2): + data[i] -= x + data[i + 1] -= y + x += data[i] + y += data[i + 1] + path[pathIndex] = ('m', data) + elif cmd in ['L', 'T']: + for i in range(i, len(data), 2): + data[i] -= x + data[i + 1] -= y + x += data[i] + y += data[i + 1] + path[pathIndex] = (cmd.lower(), data) + elif cmd in ['m']: + if pathIndex == 0: + # START OF PATH - this is an absolute moveto + # followed by relative linetos + startx, starty = data[0], data[1] + x, y = startx, starty + i = 2 + else: + startx = x + data[0] + starty = y + data[1] + for i in range(i, len(data), 2): + x += data[i] + y += data[i + 1] + elif cmd in ['l', 't']: + x += sum(data[0::2]) + y += sum(data[1::2]) + elif cmd in ['S', 'Q']: + for i in range(i, len(data), 4): + data[i] -= x + data[i + 1] -= y + data[i + 2] -= x + data[i + 3] -= y + x += data[i + 2] + y += data[i + 3] + path[pathIndex] = (cmd.lower(), data) + elif cmd in ['s', 'q']: + x += sum(data[2::4]) + y += sum(data[3::4]) + elif cmd == 'C': + for i in range(i, len(data), 6): + data[i] -= x + data[i + 1] -= y + data[i + 2] -= x + data[i + 3] -= y + data[i + 4] -= x + data[i + 5] -= y + x += data[i + 4] + y += data[i + 5] + path[pathIndex] = ('c', data) + elif cmd == 'c': + x += sum(data[4::6]) + y += sum(data[5::6]) + elif cmd in ['z', 'Z']: + x, y = startx, starty + path[pathIndex] = ('z', data) - # remove empty segments - # Reuse the data structure 'path' and the coordinate lists, even if we're - # deleting items, because these deletions are relatively cheap. 
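The absolute-to-relative pass above only subtracts the current point from each coordinate pair and then advances the current point by the now-relative delta. A stand-alone sketch of the simple `L` case, using the same `(command, [coordinates])` shape as the parsed path (hypothetical helper name, not part of Scour):

```python
# Hedged sketch: convert absolute "L" coordinate pairs to relative "l" pairs
# in place, returning the updated current point.
def lineto_abs_to_rel(data, x, y):
    for i in range(0, len(data), 2):
        data[i] -= x
        data[i + 1] -= y
        x += data[i]        # current point advances by the now-relative delta
        y += data[i + 1]
    return x, y

coords = [20.0, 20.0, 30.0, 10.0]        # "L20 20 30 10" starting at (10, 10)
x, y = lineto_abs_to_rel(coords, 10.0, 10.0)
print(coords, (x, y))                    # [10.0, 10.0, 10.0, -10.0] (30.0, 10.0)
```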
- if not withRoundLineCaps: - for pathIndex in xrange(0, len(path)): - cmd, data = path[pathIndex] - i = 0 - if cmd in ['m','l','t']: - if cmd == 'm': - # remove m0,0 segments - if pathIndex > 0 and data[0] == data[i+1] == 0: - # 'm0,0 x,y' can be replaces with 'lx,y', - # except the first m which is a required absolute moveto - path[pathIndex] = ('l', data[2:]) - numPathSegmentsReduced += 1 - else: # else skip move coordinate - i = 2 + # remove empty segments and redundant commands + # Reuse the data structure 'path' and the coordinate lists, even if we're + # deleting items, because these deletions are relatively cheap. + if not has_round_or_square_linecaps: + # remove empty path segments + for pathIndex in range(len(path)): + cmd, data = path[pathIndex] + i = 0 + if cmd in ['m', 'l', 't']: + if cmd == 'm': + # It might be tempting to rewrite "m0 0 ..." into + # "l..." here. However, this is an unsound + # optimization in general as "m0 0 ... z" is + # different from "l...z". + # + # To do such a rewrite, we need to understand the + # full subpath. This logic happens after this + # loop. + i = 2 + while i < len(data): + if data[i] == data[i + 1] == 0: + del data[i:i + 2] + stats.num_path_segments_removed += 1 + else: + i += 2 + elif cmd == 'c': + while i < len(data): + if data[i] == data[i + 1] == data[i + 2] == data[i + 3] == data[i + 4] == data[i + 5] == 0: + del data[i:i + 6] + stats.num_path_segments_removed += 1 + else: + i += 6 + elif cmd == 'a': + while i < len(data): + if data[i + 5] == data[i + 6] == 0: + del data[i:i + 7] + stats.num_path_segments_removed += 1 + else: + i += 7 + elif cmd == 'q': + while i < len(data): + if data[i] == data[i + 1] == data[i + 2] == data[i + 3] == 0: + del data[i:i + 4] + stats.num_path_segments_removed += 1 + else: + i += 4 + elif cmd in ['h', 'v']: + oldLen = len(data) + path[pathIndex] = (cmd, [coord for coord in data if coord != 0]) + stats.num_path_segments_removed += len(path[pathIndex][1]) - oldLen + + # remove no-op commands + pathIndex = len(path) + subpath_needs_anchor = False + # NB: We can never rewrite the first m/M command (expect if it + # is the only command) + while pathIndex > 1: + pathIndex -= 1 + cmd, data = path[pathIndex] + if cmd == 'z': + next_cmd, next_data = path[pathIndex - 1] + if next_cmd == 'm' and len(next_data) == 2: + # mX Yz -> mX Y + + # note the len check on next_data as it is not + # safe to rewrite "m0 0 1 1z" in general (it is a + # question of where the "pen" ends - you can + # continue a draw on the same subpath after a + # "z"). + del path[pathIndex] + stats.num_path_segments_removed += 1 + else: + # it is not safe to rewrite "m0 0 ..." to "l..." + # because of this "z" command. + subpath_needs_anchor = True + elif cmd == 'm': + if len(path) - 1 == pathIndex and len(data) == 2: + # Ends with an empty move (but no line/draw + # following it) + del path[pathIndex] + stats.num_path_segments_removed += 1 + continue + if subpath_needs_anchor: + subpath_needs_anchor = False + elif data[0] == data[1] == 0: + # unanchored, i.e. we can replace "m0 0 ..." with + # "l..." as there is no "z" after it. + path[pathIndex] = ('l', data[2:]) + stats.num_path_segments_removed += 1 + + # fixup: Delete subcommands having no coordinates. 
+ path = [elem for elem in path if len(elem[1]) > 0 or elem[0] == 'z'] + + # convert straight curves into lines + newPath = [path[0]] + for (cmd, data) in path[1:]: + i = 0 + newData = data + if cmd == 'c': + newData = [] while i < len(data): - if data[i] == data[i+1] == 0: - del data[i:i+2] - numPathSegmentsReduced += 1 - else: - i += 2 - elif cmd == 'c': + # since all commands are now relative, we can think of previous point as (0,0) + # and new point (dx,dy) is (data[i+4],data[i+5]) + # eqn of line will be y = (dy/dx)*x or if dx=0 then eqn of line is x=0 + (p1x, p1y) = (data[i], data[i + 1]) + (p2x, p2y) = (data[i + 2], data[i + 3]) + dx = data[i + 4] + dy = data[i + 5] + + foundStraightCurve = False + + if dx == 0: + if p1x == 0 and p2x == 0: + foundStraightCurve = True + else: + m = dy / dx + if p1y == m * p1x and p2y == m * p2x: + foundStraightCurve = True + + if foundStraightCurve: + # flush any existing curve coords first + if newData: + newPath.append((cmd, newData)) + newData = [] + # now create a straight line segment + newPath.append(('l', [dx, dy])) + else: + newData.extend(data[i:i + 6]) + + i += 6 + if newData or cmd == 'z' or cmd == 'Z': + newPath.append((cmd, newData)) + path = newPath + + # collapse all consecutive commands of the same type into one command + prevCmd = '' + prevData = [] + newPath = [] + for (cmd, data) in path: + if prevCmd == '': + # initialize with current path cmd and data + prevCmd = cmd + prevData = data + else: + # collapse if + # - cmd is not moveto (explicit moveto commands are not drawn) + # - the previous and current commands are the same type, + # - the previous command is moveto and the current is lineto + # (subsequent moveto pairs are treated as implicit lineto commands) + if cmd != 'm' and (cmd == prevCmd or (cmd == 'l' and prevCmd == 'm')): + prevData.extend(data) + # else flush the previous command if it is not the same type as the current command + else: + newPath.append((prevCmd, prevData)) + prevCmd = cmd + prevData = data + # flush last command and data + newPath.append((prevCmd, prevData)) + path = newPath + + # convert to shorthand path segments where possible + newPath = [] + for (cmd, data) in path: + # convert line segments into h,v where possible + if cmd == 'l': + i = 0 + lineTuples = [] while i < len(data): - if data[i] == data[i+1] == data[i+2] == data[i+3] == data[i+4] == data[i+5] == 0: - del data[i:i+6] - numPathSegmentsReduced += 1 - else: - i += 6 - elif cmd == 'a': + if data[i] == 0: + # vertical + if lineTuples: + # flush the existing line command + newPath.append(('l', lineTuples)) + lineTuples = [] + # append the v and then the remaining line coords + newPath.append(('v', [data[i + 1]])) + stats.num_path_segments_removed += 1 + elif data[i + 1] == 0: + if lineTuples: + # flush the line command, then append the h and then the remaining line coords + newPath.append(('l', lineTuples)) + lineTuples = [] + newPath.append(('h', [data[i]])) + stats.num_path_segments_removed += 1 + else: + lineTuples.extend(data[i:i + 2]) + i += 2 + if lineTuples: + newPath.append(('l', lineTuples)) + # also handle implied relative linetos + elif cmd == 'm': + i = 2 + lineTuples = [data[0], data[1]] while i < len(data): - if data[i+5] == data[i+6] == 0: - del data[i:i+7] - numPathSegmentsReduced += 1 - else: - i += 7 - elif cmd == 'q': + if data[i] == 0: + # vertical + if lineTuples: + # flush the existing m/l command + newPath.append((cmd, lineTuples)) + lineTuples = [] + cmd = 'l' # dealing with linetos now + # append the v and then the 
remaining line coords + newPath.append(('v', [data[i + 1]])) + stats.num_path_segments_removed += 1 + elif data[i + 1] == 0: + if lineTuples: + # flush the m/l command, then append the h and then the remaining line coords + newPath.append((cmd, lineTuples)) + lineTuples = [] + cmd = 'l' # dealing with linetos now + newPath.append(('h', [data[i]])) + stats.num_path_segments_removed += 1 + else: + lineTuples.extend(data[i:i + 2]) + i += 2 + if lineTuples: + newPath.append((cmd, lineTuples)) + # convert Bézier curve segments into s where possible + elif cmd == 'c': + # set up the assumed bezier control point as the current point, + # i.e. (0,0) since we're using relative coords + bez_ctl_pt = (0, 0) + # however if the previous command was 's' + # the assumed control point is a reflection of the previous control point at the current point + if len(newPath): + (prevCmd, prevData) = newPath[-1] + if prevCmd == 's': + bez_ctl_pt = (prevData[-2] - prevData[-4], prevData[-1] - prevData[-3]) + i = 0 + curveTuples = [] while i < len(data): - if data[i] == data[i+1] == data[i+2] == data[i+3] == 0: - del data[i:i+4] - numPathSegmentsReduced += 1 - else: - i += 4 - elif cmd in ['h','v']: - oldLen = len(data) - path[pathIndex] = (cmd, [coord for coord in data if coord != 0]) - numPathSegmentsReduced += len(path[pathIndex][1]) - oldLen + # rotate by 180deg means negate both coordinates + # if the previous control point is equal then we can substitute a + # shorthand bezier command + if bez_ctl_pt[0] == data[i] and bez_ctl_pt[1] == data[i + 1]: + if curveTuples: + newPath.append(('c', curveTuples)) + curveTuples = [] + # append the s command + newPath.append(('s', [data[i + 2], data[i + 3], data[i + 4], data[i + 5]])) + stats.num_path_segments_removed += 1 + else: + j = 0 + while j <= 5: + curveTuples.append(data[i + j]) + j += 1 - # fixup: Delete subcommands having no coordinates. 
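The `c` → `s` conversion above hinges on a single check: in relative coordinates the shorthand form is allowed exactly when the segment's first control point equals the reflection of the previous segment's second control point through the previous end point. A small illustrative sketch of that test (assumed semantics for clarity, not Scour's internal code):

```python
# Hedged sketch of the reflection test behind the "c" -> "s" shorthand rewrite.
def can_use_shorthand(prev_second_ctl, prev_end, first_ctl):
    """prev_* are the previous cubic segment's relative second control and end points."""
    reflected = (prev_end[0] - prev_second_ctl[0], prev_end[1] - prev_second_ctl[1])
    return reflected == tuple(first_ctl)

# previous segment "c 2 0, 8 4, 10 4": reflecting (8, 4) through (10, 4) gives (2, 0)
print(can_use_shorthand((8, 4), (10, 4), (2, 0)))   # True  -> next segment may use "s"
print(can_use_shorthand((8, 4), (10, 4), (3, 1)))   # False -> keep the full "c" form
```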
- path = [elem for elem in path if len(elem[1]) > 0 or elem[0] == 'z'] + # set up control point for next curve segment + bez_ctl_pt = (data[i + 4] - data[i + 2], data[i + 5] - data[i + 3]) + i += 6 - # convert straight curves into lines - newPath = [path[0]] - for (cmd,data) in path[1:]: - i = 0 - newData = data - if cmd == 'c': - newData = [] - while i < len(data): - # since all commands are now relative, we can think of previous point as (0,0) - # and new point (dx,dy) is (data[i+4],data[i+5]) - # eqn of line will be y = (dy/dx)*x or if dx=0 then eqn of line is x=0 - (p1x,p1y) = (data[i],data[i+1]) - (p2x,p2y) = (data[i+2],data[i+3]) - dx = data[i+4] - dy = data[i+5] + if curveTuples: + newPath.append(('c', curveTuples)) + # convert quadratic curve segments into t where possible + elif cmd == 'q': + quad_ctl_pt = (0, 0) + i = 0 + curveTuples = [] + while i < len(data): + if quad_ctl_pt[0] == data[i] and quad_ctl_pt[1] == data[i + 1]: + if curveTuples: + newPath.append(('q', curveTuples)) + curveTuples = [] + # append the t command + newPath.append(('t', [data[i + 2], data[i + 3]])) + stats.num_path_segments_removed += 1 + else: + j = 0 + while j <= 3: + curveTuples.append(data[i + j]) + j += 1 - foundStraightCurve = False + quad_ctl_pt = (data[i + 2] - data[i], data[i + 3] - data[i + 1]) + i += 4 - if dx == 0: - if p1x == 0 and p2x == 0: - foundStraightCurve = True - else: - m = dy/dx - if p1y == m*p1x and p2y == m*p2x: - foundStraightCurve = True + if curveTuples: + newPath.append(('q', curveTuples)) + else: + newPath.append((cmd, data)) + path = newPath - if foundStraightCurve: - # flush any existing curve coords first - if newData: - newPath.append( (cmd,newData) ) - newData = [] - # now create a straight line segment - newPath.append( ('l', [dx,dy]) ) - numCurvesStraightened += 1 - else: - newData.extend(data[i:i+6]) + # For each m, l, h or v, collapse unnecessary coordinates that run in the same direction + # i.e. "h-100-100" becomes "h-200" but "h300-100" does not change. + # If the path has intermediate markers we have to preserve intermediate nodes, though. + # Reuse the data structure 'path', since we're not adding or removing subcommands. + # Also reuse the coordinate lists, even if we're deleting items, because these + # deletions are relatively cheap. 
+ if not has_intermediate_markers: + for pathIndex in range(len(path)): + cmd, data = path[pathIndex] - i += 6 - if newData or cmd == 'z' or cmd == 'Z': - newPath.append( (cmd,newData) ) - path = newPath + # h / v expects only one parameter and we start drawing with the first (so we need at least 2) + if cmd in ['h', 'v'] and len(data) >= 2: + coordIndex = 0 + while coordIndex+1 < len(data): + if is_same_sign(data[coordIndex], data[coordIndex+1]): + data[coordIndex] += data[coordIndex+1] + del data[coordIndex+1] + stats.num_path_segments_removed += 1 + else: + coordIndex += 1 - # collapse all consecutive commands of the same type into one command - prevCmd = '' - prevData = [] - newPath = [] - for (cmd,data) in path: - # flush the previous command if it is not the same type as the current command - if prevCmd != '': - if cmd != prevCmd or cmd == 'm': - newPath.append( (prevCmd, prevData) ) - prevCmd = '' - prevData = [] + # l expects two parameters and we start drawing with the first (so we need at least 4) + elif cmd == 'l' and len(data) >= 4: + coordIndex = 0 + while coordIndex+2 < len(data): + if is_same_direction(*data[coordIndex:coordIndex+4]): + data[coordIndex] += data[coordIndex+2] + data[coordIndex+1] += data[coordIndex+3] + del data[coordIndex+2] # delete the next two elements + del data[coordIndex+2] + stats.num_path_segments_removed += 1 + else: + coordIndex += 2 - # if the previous and current commands are the same type, - # or the previous command is moveto and the current is lineto, collapse, - # but only if they are not move commands (since move can contain implicit lineto commands) - if (cmd == prevCmd or (cmd == 'l' and prevCmd == 'm')) and cmd != 'm': - prevData.extend(data) + # m expects two parameters but we have to skip the first pair as it's not drawn (so we need at least 6) + elif cmd == 'm' and len(data) >= 6: + coordIndex = 2 + while coordIndex+2 < len(data): + if is_same_direction(*data[coordIndex:coordIndex+4]): + data[coordIndex] += data[coordIndex+2] + data[coordIndex+1] += data[coordIndex+3] + del data[coordIndex+2] # delete the next two elements + del data[coordIndex+2] + stats.num_path_segments_removed += 1 + else: + coordIndex += 2 - # save last command and data - else: - prevCmd = cmd - prevData = data - # flush last command and data - if prevCmd != '': - newPath.append( (prevCmd, prevData) ) - path = newPath + # it is possible that we have consecutive h, v, c, t commands now + # so again collapse all consecutive commands of the same type into one command + prevCmd = '' + prevData = [] + newPath = [path[0]] + for (cmd, data) in path[1:]: + # flush the previous command if it is not the same type as the current command + if prevCmd != '': + if cmd != prevCmd or cmd == 'm': + newPath.append((prevCmd, prevData)) + prevCmd = '' + prevData = [] - # convert to shorthand path segments where possible - newPath = [] - for (cmd,data) in path: - # convert line segments into h,v where possible - if cmd == 'l': - i = 0 - lineTuples = [] - while i < len(data): - if data[i] == 0: - # vertical - if lineTuples: - # flush the existing line command - newPath.append( ('l', lineTuples) ) - lineTuples = [] - # append the v and then the remaining line coords - newPath.append( ('v', [data[i+1]]) ) - numPathSegmentsReduced += 1 - elif data[i+1] == 0: - if lineTuples: - # flush the line command, then append the h and then the remaining line coords - newPath.append( ('l', lineTuples) ) - lineTuples = [] - newPath.append( ('h', [data[i]]) ) - numPathSegmentsReduced += 1 - else: - 
lineTuples.extend(data[i:i+2]) - i += 2 - if lineTuples: - newPath.append( ('l', lineTuples) ) - # also handle implied relative linetos - elif cmd == 'm': - i = 2 - lineTuples = [data[0], data[1]] - while i < len(data): - if data[i] == 0: - # vertical - if lineTuples: - # flush the existing m/l command - newPath.append( (cmd, lineTuples) ) - lineTuples = [] - cmd = 'l' # dealing with linetos now - # append the v and then the remaining line coords - newPath.append( ('v', [data[i+1]]) ) - numPathSegmentsReduced += 1 - elif data[i+1] == 0: - if lineTuples: - # flush the m/l command, then append the h and then the remaining line coords - newPath.append( (cmd, lineTuples) ) - lineTuples = [] - cmd = 'l' # dealing with linetos now - newPath.append( ('h', [data[i]]) ) - numPathSegmentsReduced += 1 - else: - lineTuples.extend(data[i:i+2]) - i += 2 - if lineTuples: - newPath.append( (cmd, lineTuples) ) - # convert Bézier curve segments into s where possible - elif cmd == 'c': - bez_ctl_pt = (0,0) - i = 0 - curveTuples = [] - while i < len(data): - # rotate by 180deg means negate both coordinates - # if the previous control point is equal then we can substitute a - # shorthand bezier command - if bez_ctl_pt[0] == data[i] and bez_ctl_pt[1] == data[i+1]: - if curveTuples: - newPath.append( ('c', curveTuples) ) - curveTuples = [] - # append the s command - newPath.append( ('s', [data[i+2], data[i+3], data[i+4], data[i+5]]) ) - numPathSegmentsReduced += 1 - else: - j = 0 - while j <= 5: - curveTuples.append(data[i+j]) - j += 1 - - # set up control point for next curve segment - bez_ctl_pt = (data[i+4]-data[i+2], data[i+5]-data[i+3]) - i += 6 - - if curveTuples: - newPath.append( ('c', curveTuples) ) - # convert quadratic curve segments into t where possible - elif cmd == 'q': - quad_ctl_pt = (0,0) - i = 0 - curveTuples = [] - while i < len(data): - if quad_ctl_pt[0] == data[i] and quad_ctl_pt[1] == data[i+1]: - if curveTuples: - newPath.append( ('q', curveTuples) ) - curveTuples = [] - # append the t command - newPath.append( ('t', [data[i+2], data[i+3]]) ) - numPathSegmentsReduced += 1 - else: - j = 0; - while j <= 3: - curveTuples.append(data[i+j]) - j += 1 - - quad_ctl_pt = (data[i+2]-data[i], data[i+3]-data[i+1]) - i += 4 - - if curveTuples: - newPath.append( ('q', curveTuples) ) - else: - newPath.append( (cmd, data) ) - path = newPath - - # for each h or v, collapse unnecessary coordinates that run in the same direction - # i.e. "h-100-100" becomes "h-200" but "h300-100" does not change - # Reuse the data structure 'path', since we're not adding or removing subcommands. - # Also reuse the coordinate lists, even if we're deleting items, because these - # deletions are relatively cheap. 
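# --- Editor's note: illustrative sketch, not part of this patch ---
# The c -> s shorthand replacement in isolation: a relative cubic
# "c x1 y1 x2 y2 x y" can drop its first control point when that point is
# the reflection of the previous segment's second control point about the
# previous end point, i.e. (prev_x - prev_x2, prev_y - prev_y2).
def cubic_to_shorthand(segments):
    out = []
    reflected = (0, 0)          # implied control point before the first segment
    for x1, y1, x2, y2, x, y in segments:
        if (x1, y1) == reflected:
            out.append(('s', [x2, y2, x, y]))
        else:
            out.append(('c', [x1, y1, x2, y2, x, y]))
        reflected = (x - x2, y - y2)
    return out

# the second curve continues the first one smoothly, so it collapses to 's'
print(cubic_to_shorthand([(0, 5, 10, 10, 20, 10), (10, 0, 20, 0, 20, -10)]))
# [('c', [0, 5, 10, 10, 20, 10]), ('s', [20, 0, 20, -10])]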
- for pathIndex in xrange(1, len(path)): - cmd, data = path[pathIndex] - if cmd in ['h','v'] and len(data) > 1: - coordIndex = 1 - while coordIndex < len(data): - if isSameSign(data[coordIndex - 1], data[coordIndex]): - data[coordIndex - 1] += data[coordIndex] - del data[coordIndex] - numPathSegmentsReduced += 1 - else: - coordIndex += 1 - - # it is possible that we have consecutive h, v, c, t commands now - # so again collapse all consecutive commands of the same type into one command - prevCmd = '' - prevData = [] - newPath = [path[0]] - for (cmd,data) in path[1:]: - # flush the previous command if it is not the same type as the current command - if prevCmd != '': - if cmd != prevCmd or cmd == 'm': - newPath.append( (prevCmd, prevData) ) - prevCmd = '' - prevData = [] - - # if the previous and current commands are the same type, collapse - if cmd == prevCmd and cmd != 'm': + # if the previous and current commands are the same type, collapse + if cmd == prevCmd and cmd != 'm': prevData.extend(data) - # save last command and data - else: - prevCmd = cmd - prevData = data - # flush last command and data - if prevCmd != '': - newPath.append( (prevCmd, prevData) ) - path = newPath + # save last command and data + else: + prevCmd = cmd + prevData = data + # flush last command and data + if prevCmd != '': + newPath.append((prevCmd, prevData)) + path = newPath - newPathStr = serializePath(path, options) - numBytesSavedInPathData += ( len(oldPathStr) - len(newPathStr) ) - element.setAttribute('d', newPathStr) + newPathStr = serializePath(path, options) + # if for whatever reason we actually made the path longer don't use it + # TODO: maybe we could compare path lengths after each optimization step and use the shortest + if len(newPathStr) <= len(oldPathStr): + stats.num_bytes_saved_in_path_data += (len(oldPathStr) - len(newPathStr)) + element.setAttribute('d', newPathStr) def parseListOfPoints(s): - """ - Parse string into a list of points. + """ + Parse string into a list of points. - Returns a list of containing an even number of coordinate strings - """ - i = 0 + Returns a list containing an even number of coordinate strings + """ + i = 0 - # (wsp)? comma-or-wsp-separated coordinate pairs (wsp)? - # coordinate-pair = coordinate comma-or-wsp coordinate - # coordinate = sign? integer - # comma-wsp: (wsp+ comma? wsp*) | (comma wsp*) - ws_nums = re.split(r"\s*,?\s*", s.strip()) - nums = [] + # (wsp)? comma-or-wsp-separated coordinate pairs (wsp)? + # coordinate-pair = coordinate comma-or-wsp coordinate + # coordinate = sign? integer + # comma-wsp: (wsp+ comma? 
wsp*) | (comma wsp*) + ws_nums = RE_COMMA_WSP.split(s.strip()) + nums = [] - # also, if 100-100 is found, split it into two also - # <polygon points="100,-100,100-100,100-100-100,-100-100" /> - for i in xrange(len(ws_nums)): - negcoords = ws_nums[i].split("-") + # also, if 100-100 is found, split it into two also + # <polygon points="100,-100,100-100,100-100-100,-100-100" /> + for i in range(len(ws_nums)): + negcoords = ws_nums[i].split("-") - # this string didn't have any negative coordinates - if len(negcoords) == 1: - nums.append(negcoords[0]) - # we got negative coords - else: - for j in xrange(len(negcoords)): - # first number could be positive - if j == 0: - if negcoords[0] != '': - nums.append(negcoords[0]) - # otherwise all other strings will be negative - else: - # unless we accidentally split a number that was in scientific notation - # and had a negative exponent (500.00e-1) - prev = nums[len(nums)-1] - if prev[len(prev)-1] in ['e', 'E']: - nums[len(nums)-1] = prev + '-' + negcoords[j] - else: - nums.append( '-'+negcoords[j] ) + # this string didn't have any negative coordinates + if len(negcoords) == 1: + nums.append(negcoords[0]) + # we got negative coords + else: + for j in range(len(negcoords)): + # first number could be positive + if j == 0: + if negcoords[0] != '': + nums.append(negcoords[0]) + # otherwise all other strings will be negative + else: + # unless we accidentally split a number that was in scientific notation + # and had a negative exponent (500.00e-1) + prev = "" + if len(nums): + prev = nums[len(nums) - 1] + if prev and prev[len(prev) - 1] in ['e', 'E']: + nums[len(nums) - 1] = prev + '-' + negcoords[j] + else: + nums.append('-' + negcoords[j]) - # if we have an odd number of points, return empty - if len(nums) % 2 != 0: return [] + # if we have an odd number of points, return empty + if len(nums) % 2 != 0: + return [] - # now resolve into Decimal values - i = 0 - while i < len(nums): - try: - nums[i] = getcontext().create_decimal(nums[i]) - nums[i + 1] = getcontext().create_decimal(nums[i + 1]) - except decimal.InvalidOperation: # one of the lengths had a unit or is an invalid number - return [] + # now resolve into Decimal values + i = 0 + while i < len(nums): + try: + nums[i] = getcontext().create_decimal(nums[i]) + nums[i + 1] = getcontext().create_decimal(nums[i + 1]) + except InvalidOperation: # one of the lengths had a unit or is an invalid number + return [] - i += 2 + i += 2 - return nums + return nums +def clean_polygon(elem, options): + """ + Remove unnecessary closing point of polygon points attribute + """ + num_points_removed_from_polygon = 0 -def cleanPolygon(elem, options): - """ - Remove unnecessary closing point of polygon points attribute - """ - global numPointsRemovedFromPolygon - - pts = parseListOfPoints(elem.getAttribute('points')) - N = len(pts)/2 - if N >= 2: - (startx,starty) = pts[:2] - (endx,endy) = pts[-2:] - if startx == endx and starty == endy: - del pts[-2:] - numPointsRemovedFromPolygon += 1 - elem.setAttribute('points', scourCoordinates(pts, options, True)) - + pts = parseListOfPoints(elem.getAttribute('points')) + N = len(pts) / 2 + if N >= 2: + (startx, starty) = pts[:2] + (endx, endy) = pts[-2:] + if startx == endx and starty == endy: + del pts[-2:] + num_points_removed_from_polygon += 1 + elem.setAttribute('points', scourCoordinates(pts, options, True)) + return num_points_removed_from_polygon def cleanPolyline(elem, options): - """ - Scour the polyline points attribute - """ - pts = 
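# --- Editor's note: illustrative sketch, not part of this patch ---
# The negative-number splitting used by the points parser: "100-100" holds
# two coordinates, but a minus that follows an 'e'/'E' belongs to a negative
# exponent ("5e-1") and must not be split.  The separator regex below is a
# stand-in for RE_COMMA_WSP.
import re

COMMA_WSP = re.compile(r"\s*,\s*|\s+")

def split_points(s):
    nums = []
    for token in COMMA_WSP.split(s.strip()):
        parts = token.split('-')
        for j, part in enumerate(parts):
            if j == 0:
                if part != '':
                    nums.append(part)
            elif nums and nums[-1][-1] in ('e', 'E'):
                nums[-1] += '-' + part      # re-join a split negative exponent
            else:
                nums.append('-' + part)
    return nums

assert split_points("100,-100,100-100") == ['100', '-100', '100', '-100']
assert split_points("5e-1 3") == ['5e-1', '3']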
parseListOfPoints(elem.getAttribute('points')) - elem.setAttribute('points', scourCoordinates(pts, options, True)) + """ + Scour the polyline points attribute + """ + pts = parseListOfPoints(elem.getAttribute('points')) + elem.setAttribute('points', scourCoordinates(pts, options, True)) +def controlPoints(cmd, data): + """ + Checks if there are control points in the path data + + Returns the indices of all values in the path data which are control points + """ + cmd = cmd.lower() + if cmd in ['c', 's', 'q']: + indices = range(len(data)) + if cmd == 'c': # c: (x1 y1 x2 y2 x y)+ + return [index for index in indices if (index % 6) < 4] + elif cmd in ['s', 'q']: # s: (x2 y2 x y)+ q: (x1 y1 x y)+ + return [index for index in indices if (index % 4) < 2] + + return [] + + +def flags(cmd, data): + """ + Checks if there are flags in the path data + + Returns the indices of all values in the path data which are flags + """ + if cmd.lower() == 'a': # a: (rx ry x-axis-rotation large-arc-flag sweep-flag x y)+ + indices = range(len(data)) + return [index for index in indices if (index % 7) in [3, 4]] + + return [] + def serializePath(pathObj, options): - """ - Reserializes the path data with some cleanups. - """ - # elliptical arc commands must have comma/wsp separating the coordinates - # this fixes an issue outlined in Fix https://bugs.launchpad.net/scour/+bug/412754 - return ''.join([cmd + scourCoordinates(data, options, (cmd == 'a')) for cmd, data in pathObj]) - + """ + Reserializes the path data with some cleanups. + """ + # elliptical arc commands must have comma/wsp separating the coordinates + # this fixes an issue outlined in Fix https://bugs.launchpad.net/scour/+bug/412754 + return ''.join(cmd + scourCoordinates(data, options, + control_points=controlPoints(cmd, data), + flags=flags(cmd, data)) + for cmd, data in pathObj) def serializeTransform(transformObj): - """ - Reserializes the transform data with some cleanups. - """ - return ' '.join( - [command + '(' + ' '.join( - [scourUnitlessLength(number) for number in numbers] - ) + ')' - for command, numbers in transformObj] - ) + """ + Reserializes the transform data with some cleanups. + """ + return ' '.join(command + '(' + ' '.join(scourUnitlessLength(number) for number in numbers) + ')' + for command, numbers in transformObj) +def scourCoordinates(data, options, force_whitespace=False, control_points=[], flags=[]): + """ + Serializes coordinate data with some cleanups: + - removes all trailing zeros after the decimal + - integerize coordinates if possible + - removes extraneous whitespace + - adds spaces between values in a subcommand if required (or if force_whitespace is True) + """ + if data is not None: + newData = [] + c = 0 + previousCoord = '' + for coord in data: + is_control_point = c in control_points + scouredCoord = scourUnitlessLength(coord, + nonsci_output=options.nonsci_output, + renderer_workaround=options.renderer_workaround, + is_control_point=is_control_point) + # don't output a space if this number starts with a dot (.) or minus sign (-); we only need a space if + # - this number starts with a digit + # - this number starts with a dot but the previous number had *no* dot or exponent + # i.e. 
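# --- Editor's note: illustrative sketch, not part of this patch ---
# The index patterns computed by controlPoints() and flags() above:
# for "c (x1 y1 x2 y2 x y)+" the first four values of every 6-tuple are
# control-point coordinates, and for "a (rx ry rot large-arc sweep x y)+"
# positions 3 and 4 of every 7-tuple are the boolean flags.
c_data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]     # two cubic segments
a_data = [25, 25, 0, 1, 0, 50, 0]                    # one arc segment

print([i for i in range(len(c_data)) if (i % 6) < 4])        # [0, 1, 2, 3, 6, 7, 8, 9]
print([i for i in range(len(a_data)) if (i % 7) in (3, 4)])  # [3, 4]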
'1.3 0.5' -> '1.3.5' or '1e3 0.5' -> '1e3.5' is fine but '123 0.5' -> '123.5' is obviously not + # - 'force_whitespace' is explicitly set to 'True' + # we never need a space after flags (occurring in elliptical arcs), but librsvg struggles without it + if (c > 0 + and (force_whitespace + or scouredCoord[0].isdigit() + or (scouredCoord[0] == '.' and not ('.' in previousCoord or 'e' in previousCoord))) + and ((c-1 not in flags) or options.renderer_workaround)): + newData.append(' ') -def scourCoordinates(data, options, forceCommaWsp = False): - """ - Serializes coordinate data with some cleanups: - - removes all trailing zeros after the decimal - - integerize coordinates if possible - - removes extraneous whitespace - - adds spaces between values in a subcommand if required (or if forceCommaWsp is True) - """ - if data != None: - newData = [] - c = 0 - previousCoord = '' - for coord in data: - scouredCoord = scourUnitlessLength(coord, needsRendererWorkaround=options.renderer_workaround) - # only need the comma if the current number starts with a digit - # (numbers can start with - without needing a comma before) - # or if forceCommaWsp is True - # or if this number starts with a dot and the previous number - # had *no* dot or exponent (so we can go like -5.5.5 for -5.5,0.5 - # and 4e4.5 for 40000,0.5) - if c > 0 and (forceCommaWsp - or scouredCoord[0].isdigit() - or (scouredCoord[0] == '.' and not ('.' in previousCoord or 'e' in previousCoord)) - ): - newData.append( ' ' ) + # add the scoured coordinate to the path string + newData.append(scouredCoord) + previousCoord = scouredCoord + c += 1 - # add the scoured coordinate to the path string - newData.append( scouredCoord ) - previousCoord = scouredCoord - c += 1 - - # What we need to do to work around GNOME bugs 548494, 563933 and - # 620565, which are being fixed and unfixed in Ubuntu, is - # to make sure that a dot doesn't immediately follow a command - # (so 'h50' and 'h0.5' are allowed, but not 'h.5'). - # Then, we need to add a space character after any coordinates - # having an 'e' (scientific notation), so as to have the exponent - # separate from the next number. - if options.renderer_workaround: - if len(newData) > 0: - for i in xrange(1, len(newData)): - if newData[i][0] == '-' and 'e' in newData[i - 1]: - newData[i - 1] += ' ' + # What we need to do to work around GNOME bugs 548494, 563933 and 620565, is to make sure that a dot doesn't + # immediately follow a command (so 'h50' and 'h0.5' are allowed, but not 'h.5'). + # Then, we need to add a space character after any coordinates having an 'e' (scientific notation), + # so as to have the exponent separate from the next number. + # TODO: Check whether this is still required (bugs all marked as fixed, might be time to phase it out) + if options.renderer_workaround: + if len(newData) > 0: + for i in range(1, len(newData)): + if newData[i][0] == '-' and 'e' in newData[i - 1]: + newData[i - 1] += ' ' + return ''.join(newData) + else: return ''.join(newData) - else: - return ''.join(newData) - - return '' + return '' def scourLength(length): - """ - Scours a length. Accepts units. - """ - length = SVGLength(length) + """ + Scours a length. Accepts units. 
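# --- Editor's note: illustrative sketch, not part of this patch ---
# The separator rule used above when serializing coordinates: a space is
# only emitted when omitting it would glue two numbers together.
def needs_space(prev, curr):
    if curr[0].isdigit():
        return True
    if curr[0] == '.' and not ('.' in prev or 'e' in prev):
        return True
    return False

assert needs_space('123', '.5') is True     # '123.5' would read as one number
assert needs_space('1.3', '.5') is False    # '1.3.5' still parses as 1.3 and .5
assert needs_space('1e3', '.5') is False    # '1e3.5' still parses as 1e3 and .5
assert needs_space('7', '-4') is False      # the minus sign already separates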
+ """ + length = SVGLength(length) - return scourUnitlessLength(length.value) + Unit.str(length.units) + return scourUnitlessLength(length.value) + Unit.str(length.units) +def scourUnitlessLength(length, nonsci_output=False, renderer_workaround=False, is_control_point=False): # length is of a numeric type -def scourUnitlessLength(length, needsRendererWorkaround=False): # length is of a numeric type - """ - Scours the numeric part of a length only. Does not accept units. + """ + Scours the numeric part of a length only. Does not accept units. - This is faster than scourLength on elements guaranteed not to - contain units. - """ - # reduce to the proper number of digits - if not isinstance(length, Decimal): - length = getcontext().create_decimal(str(length)) - # if the value is an integer, it may still have .0[...] attached to it for some reason - # remove those - if int(length) == length: - length = getcontext().create_decimal(int(length)) + This is faster than scourLength on elements guaranteed not to + contain units. + """ + if not isinstance(length, Decimal): + length = getcontext().create_decimal(str(length)) + initial_length = length - # gather the non-scientific notation version of the coordinate. - # this may actually be in scientific notation if the value is - # sufficiently large or small, so this is a misnomer. - nonsci = unicode(length).lower().replace("e+", "e") - if not needsRendererWorkaround: - if len(nonsci) > 2 and nonsci[:2] == '0.': - nonsci = nonsci[1:] # remove the 0, leave the dot - elif len(nonsci) > 3 and nonsci[:3] == '-0.': - nonsci = '-' + nonsci[2:] # remove the 0, leave the minus and dot + # reduce numeric precision + # plus() corresponds to the unary prefix plus operator and applies context precision and rounding + if is_control_point: + length = scouringContextC.plus(length) + else: + length = scouringContext.plus(length) - if len(nonsci) > 3: # avoid calling normalize unless strictly necessary - # and then the scientific notation version, with E+NUMBER replaced with - # just eNUMBER, since SVG accepts this. - sci = unicode(length.normalize()).lower().replace("e+", "e") + # remove trailing zeroes as we do not care for significance + intLength = length.to_integral_value() + if length == intLength: + length = Decimal(intLength) + else: + length = length.normalize() - if len(sci) < len(nonsci): return sci - else: return nonsci - else: return nonsci + # Gather the non-scientific notation version of the coordinate. + # Re-quantize from the initial value to prevent unnecessary loss of precision + # (e.g. 123.4 should become 123, not 120 or even 100) + nonsci = '{0:f}'.format(length) + nonsci = '{0:f}'.format(initial_length.quantize(Decimal(nonsci))) + if not renderer_workaround: + if len(nonsci) > 2 and nonsci[:2] == '0.': + nonsci = nonsci[1:] # remove the 0, leave the dot + elif len(nonsci) > 3 and nonsci[:3] == '-0.': + nonsci = '-' + nonsci[2:] # remove the 0, leave the minus and dot + return_value = nonsci + # Prevent scientific notation, interferes with some label maker software + if nonsci_output: + return return_value -def reducePrecision(element) : - """ - Because opacities, letter spacings, stroke widths and all that don't need - to be preserved in SVG files with 9 digits of precision. - Takes all of these attributes, in the given element node and its children, - and reduces their precision to the current Decimal context's precision. - Also checks for the attributes actually being lengths, not 'inherit', 'none' - or anything that isn't an SVGLength. 
+ # Gather the scientific notation version of the coordinate which + # can only be shorter if the length of the number is at least 4 characters (e.g. 1000 = 1e3). + if len(nonsci) > 3: + # We have to implement this ourselves since both 'normalize()' and 'to_sci_string()' + # don't handle negative exponents in a reasonable way (e.g. 0.000001 remains unchanged) + exponent = length.adjusted() # how far do we have to shift the dot? + length = length.scaleb(-exponent).normalize() # shift the dot and remove potential trailing zeroes - Returns the number of bytes saved after performing these reductions. - """ - num = 0 + sci = six.text_type(length) + 'e' + six.text_type(exponent) - styles = _getStyle(element) - for lengthAttr in ['opacity', 'flood-opacity', 'fill-opacity', - 'stroke-opacity', 'stop-opacity', 'stroke-miterlimit', - 'stroke-dashoffset', 'letter-spacing', 'word-spacing', - 'kerning', 'font-size-adjust', 'font-size', - 'stroke-width']: - val = element.getAttribute(lengthAttr) - if val != '': - valLen = SVGLength(val) - if valLen.units != Unit.INVALID: # not an absolute/relative size or inherit, can be % though - newVal = scourLength(val) - if len(newVal) < len(val): - num += len(val) - len(newVal) - element.setAttribute(lengthAttr, newVal) - # repeat for attributes hidden in styles - if lengthAttr in styles.keys(): - val = styles[lengthAttr] - valLen = SVGLength(val) - if valLen.units != Unit.INVALID: - newVal = scourLength(val) - if len(newVal) < len(val): - num += len(val) - len(newVal) - styles[lengthAttr] = newVal - _setStyle(element, styles) + if len(sci) < len(nonsci): + return_value = sci - for child in element.childNodes: - if child.nodeType == 1: - num += reducePrecision(child) + return return_value - return num +def reducePrecision(element): + """ + Because opacities, letter spacings, stroke widths and all that don't need + to be preserved in SVG files with 9 digits of precision. + + Takes all of these attributes, in the given element node and its children, + and reduces their precision to the current Decimal context's precision. + Also checks for the attributes actually being lengths, not 'inherit', 'none' + or anything that isn't an SVGLength. + + Returns the number of bytes saved after performing these reductions. 
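# --- Editor's note: illustrative sketch, not part of this patch ---
# Reducing a coordinate to N significant digits with a smaller Decimal
# context and then keeping whichever of the plain and scientific renderings
# is shorter, roughly what scourUnitlessLength() does.  prec=5 stands in
# for options.digits.
from decimal import Decimal, Context

ctx = Context(prec=5)

def shorten(value):
    d = ctx.plus(Decimal(value))            # unary plus applies the context's rounding
    nonsci = '{0:f}'.format(d.normalize())
    exponent = d.adjusted()
    sci = '{0:f}e{1}'.format(d.scaleb(-exponent).normalize(), exponent)
    return sci if len(sci) < len(nonsci) else nonsci

print(shorten('123.456789'))   # '123.46'
print(shorten('0.000001'))     # '1e-6'
print(shorten('100000'))       # '1e5'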
+ """ + num = 0 + + styles = _getStyle(element) + for lengthAttr in ['opacity', 'flood-opacity', 'fill-opacity', + 'stroke-opacity', 'stop-opacity', 'stroke-miterlimit', + 'stroke-dashoffset', 'letter-spacing', 'word-spacing', + 'kerning', 'font-size-adjust', 'font-size', + 'stroke-width']: + val = element.getAttribute(lengthAttr) + if val != '': + valLen = SVGLength(val) + if valLen.units != Unit.INVALID: # not an absolute/relative size or inherit, can be % though + newVal = scourLength(val) + if len(newVal) < len(val): + num += len(val) - len(newVal) + element.setAttribute(lengthAttr, newVal) + # repeat for attributes hidden in styles + if lengthAttr in styles: + val = styles[lengthAttr] + valLen = SVGLength(val) + if valLen.units != Unit.INVALID: + newVal = scourLength(val) + if len(newVal) < len(val): + num += len(val) - len(newVal) + styles[lengthAttr] = newVal + _setStyle(element, styles) + + for child in element.childNodes: + if child.nodeType == Node.ELEMENT_NODE: + num += reducePrecision(child) + + return num def optimizeAngle(angle): - """ - Because any rotation can be expressed within 360 degrees - of any given number, and since negative angles sometimes - are one character longer than corresponding positive angle, - we shorten the number to one in the range to [-90, 270[. - """ - # First, we put the new angle in the range ]-360, 360[. - # The modulo operator yields results with the sign of the - # divisor, so for negative dividends, we preserve the sign - # of the angle. - if angle < 0: angle %= -360 - else: angle %= 360 - # 720 degrees is unneccessary, as 360 covers all angles. - # As "-x" is shorter than "35x" and "-xxx" one character - # longer than positive angles <= 260, we constrain angle - # range to [-90, 270[ (or, equally valid: ]-100, 260]). - if angle >= 270: angle -= 360 - elif angle < -90: angle += 360 - return angle - + """ + Because any rotation can be expressed within 360 degrees + of any given number, and since negative angles sometimes + are one character longer than corresponding positive angle, + we shorten the number to one in the range to [-90, 270[. + """ + # First, we put the new angle in the range ]-360, 360[. + # The modulo operator yields results with the sign of the + # divisor, so for negative dividends, we preserve the sign + # of the angle. + if angle < 0: + angle %= -360 + else: + angle %= 360 + # 720 degrees is unnecessary, as 360 covers all angles. + # As "-x" is shorter than "35x" and "-xxx" one character + # longer than positive angles <= 260, we constrain angle + # range to [-90, 270[ (or, equally valid: ]-100, 260]). + if angle >= 270: + angle -= 360 + elif angle < -90: + angle += 360 + return angle def optimizeTransform(transform): - """ - Optimises a series of transformations parsed from a single - transform="" attribute. + """ + Optimises a series of transformations parsed from a single + transform="" attribute. - The transformation list is modified in-place. - """ - # FIXME: reordering these would optimize even more cases: - # first: Fold consecutive runs of the same transformation - # extra: Attempt to cast between types to create sameness: - # "matrix(0 1 -1 0 0 0) rotate(180) scale(-1)" all - # are rotations (90, 180, 180) -- thus "rotate(90)" - # second: Simplify transforms where numbers are optional. 
- # third: Attempt to simplify any single remaining matrix() - # - # if there's only one transformation and it's a matrix, - # try to make it a shorter non-matrix transformation - # NOTE: as matrix(a b c d e f) in SVG means the matrix: - # |¯ a c e ¯| make constants |¯ A1 A2 A3 ¯| - # | b d f | translating them | B1 B2 B3 | - # |_ 0 0 1 _| to more readable |_ 0 0 1 _| - if len(transform) == 1 and transform[0][0] == 'matrix': - matrix = A1, B1, A2, B2, A3, B3 = transform[0][1] - # |¯ 1 0 0 ¯| - # | 0 1 0 | Identity matrix (no transformation) - # |_ 0 0 1 _| - if matrix == [1, 0, 0, 1, 0, 0]: - del transform[0] - # |¯ 1 0 X ¯| - # | 0 1 Y | Translation by (X, Y). - # |_ 0 0 1 _| - elif (A1 == 1 and A2 == 0 - and B1 == 0 and B2 == 1): - transform[0] = ('translate', [A3, B3]) - # |¯ X 0 0 ¯| - # | 0 Y 0 | Scaling by (X, Y). - # |_ 0 0 1 _| - elif ( A2 == 0 and A3 == 0 - and B1 == 0 and B3 == 0): - transform[0] = ('scale', [A1, B2]) - # |¯ cos(A) -sin(A) 0 ¯| Rotation by angle A, - # | sin(A) cos(A) 0 | clockwise, about the origin. - # |_ 0 0 1 _| A is in degrees, [-180...180]. - elif (A1 == B2 and -1 <= A1 <= 1 and A3 == 0 - and -B1 == A2 and -1 <= B1 <= 1 and B3 == 0 - # as cos² A + sin² A == 1 and as decimal trig is approximate: - # FIXME: the "epsilon" term here should really be some function - # of the precision of the (sin|cos)_A terms, not 1e-15: - and abs((B1 ** 2) + (A1 ** 2) - 1) < Decimal("1e-15")): - sin_A, cos_A = B1, A1 - # while asin(A) and acos(A) both only have an 180° range - # the sign of sin(A) and cos(A) varies across quadrants, - # letting us hone in on the angle the matrix represents: - # -- => < -90 | -+ => -90..0 | ++ => 0..90 | +- => >= 90 - # - # http://en.wikipedia.org/wiki/File:Sine_cosine_plot.svg - # shows asin has the correct angle the middle quadrants: - A = Decimal(str(math.degrees(math.asin(float(sin_A))))) - if cos_A < 0: # otherwise needs adjusting from the edges - if sin_A < 0: - A = -180 - A - else: - A = 180 - A - transform[0] = ('rotate', [A]) + The transformation list is modified in-place. + """ + # FIXME: reordering these would optimize even more cases: + # first: Fold consecutive runs of the same transformation + # extra: Attempt to cast between types to create sameness: + # "matrix(0 1 -1 0 0 0) rotate(180) scale(-1)" all + # are rotations (90, 180, 180) -- thus "rotate(90)" + # second: Simplify transforms where numbers are optional. + # third: Attempt to simplify any single remaining matrix() + # + # if there's only one transformation and it's a matrix, + # try to make it a shorter non-matrix transformation + # NOTE: as matrix(a b c d e f) in SVG means the matrix: + # |¯ a c e ¯| make constants |¯ A1 A2 A3 ¯| + # | b d f | translating them | B1 B2 B3 | + # |_ 0 0 1 _| to more readable |_ 0 0 1 _| + if len(transform) == 1 and transform[0][0] == 'matrix': + matrix = A1, B1, A2, B2, A3, B3 = transform[0][1] + # |¯ 1 0 0 ¯| + # | 0 1 0 | Identity matrix (no transformation) + # |_ 0 0 1 _| + if matrix == [1, 0, 0, 1, 0, 0]: + del transform[0] + # |¯ 1 0 X ¯| + # | 0 1 Y | Translation by (X, Y). + # |_ 0 0 1 _| + elif (A1 == 1 and A2 == 0 + and B1 == 0 and B2 == 1): + transform[0] = ('translate', [A3, B3]) + # |¯ X 0 0 ¯| + # | 0 Y 0 | Scaling by (X, Y). + # |_ 0 0 1 _| + elif (A2 == 0 and A3 == 0 + and B1 == 0 and B3 == 0): + transform[0] = ('scale', [A1, B2]) + # |¯ cos(A) -sin(A) 0 ¯| Rotation by angle A, + # | sin(A) cos(A) 0 | clockwise, about the origin. + # |_ 0 0 1 _| A is in degrees, [-180...180]. 
+ elif (A1 == B2 and -1 <= A1 <= 1 and A3 == 0 + and -B1 == A2 and -1 <= B1 <= 1 and B3 == 0 + # as cos² A + sin² A == 1 and as decimal trig is approximate: + # FIXME: the "epsilon" term here should really be some function + # of the precision of the (sin|cos)_A terms, not 1e-15: + and abs((B1 ** 2) + (A1 ** 2) - 1) < Decimal("1e-15")): + sin_A, cos_A = B1, A1 + # while asin(A) and acos(A) both only have an 180° range + # the sign of sin(A) and cos(A) varies across quadrants, + # letting us hone in on the angle the matrix represents: + # -- => < -90 | -+ => -90..0 | ++ => 0..90 | +- => >= 90 + # + # http://en.wikipedia.org/wiki/File:Sine_cosine_plot.svg + # shows asin has the correct angle the middle quadrants: + A = Decimal(str(math.degrees(math.asin(float(sin_A))))) + if cos_A < 0: # otherwise needs adjusting from the edges + if sin_A < 0: + A = -180 - A + else: + A = 180 - A + transform[0] = ('rotate', [A]) - # Simplify transformations where numbers are optional. - for type, args in transform: - if type == 'translate': - # Only the X coordinate is required for translations. - # If the Y coordinate is unspecified, it's 0. - if len(args) == 2 and args[1] == 0: - del args[1] - elif type == 'rotate': - args[0] = optimizeAngle(args[0]) # angle - # Only the angle is required for rotations. - # If the coordinates are unspecified, it's the origin (0, 0). - if len(args) == 3 and args[1] == args[2] == 0: - del args[1:] - elif type == 'scale': - # Only the X scaling factor is required. - # If the Y factor is unspecified, it's the same as X. - if len(args) == 2 and args[0] == args[1]: - del args[1] + # Simplify transformations where numbers are optional. + for type, args in transform: + if type == 'translate': + # Only the X coordinate is required for translations. + # If the Y coordinate is unspecified, it's 0. + if len(args) == 2 and args[1] == 0: + del args[1] + elif type == 'rotate': + args[0] = optimizeAngle(args[0]) # angle + # Only the angle is required for rotations. + # If the coordinates are unspecified, it's the origin (0, 0). + if len(args) == 3 and args[1] == args[2] == 0: + del args[1:] + elif type == 'scale': + # Only the X scaling factor is required. + # If the Y factor is unspecified, it's the same as X. + if len(args) == 2 and args[0] == args[1]: + del args[1] - # Attempt to coalesce runs of the same transformation. - # Translations followed immediately by other translations, - # rotations followed immediately by other rotations, - # scaling followed immediately by other scaling, - # are safe to add. - # Identity skewX/skewY are safe to remove, but how do they accrete? - # |¯ 1 0 0 ¯| - # | tan(A) 1 0 | skews X coordinates by angle A - # |_ 0 0 1 _| - # - # |¯ 1 tan(A) 0 ¯| - # | 0 1 0 | skews Y coordinates by angle A - # |_ 0 0 1 _| - # - # FIXME: A matrix followed immediately by another matrix - # would be safe to multiply together, too. - i = 1 - while i < len(transform): - currType, currArgs = transform[i] - prevType, prevArgs = transform[i - 1] - if currType == prevType == 'translate': - prevArgs[0] += currArgs[0] # x - # for y, only add if the second translation has an explicit y - if len(currArgs) == 2: - if len(prevArgs) == 2: - prevArgs[1] += currArgs[1] # y - elif len(prevArgs) == 1: - prevArgs.append(currArgs[1]) # y - del transform[i] - if prevArgs[0] == prevArgs[1] == 0: - # Identity translation! - i -= 1 + # Attempt to coalesce runs of the same transformation. 
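# --- Editor's note: illustrative sketch, not part of this patch ---
# The matrix(a b c d e f) special cases above, shown on the same
# [('matrix', [a, b, c, d, e, f])] list-of-tuples representation
# (the rotation case is omitted for brevity):
def simplify_matrix(transform):
    if len(transform) == 1 and transform[0][0] == 'matrix':
        a, b, c, d, e, f = transform[0][1]
        if [a, b, c, d, e, f] == [1, 0, 0, 1, 0, 0]:
            del transform[0]                          # identity: drop it entirely
        elif (a, b, c, d) == (1, 0, 0, 1):
            transform[0] = ('translate', [e, f])      # pure translation
        elif (b, c, e, f) == (0, 0, 0, 0):
            transform[0] = ('scale', [a, d])          # pure scaling
    return transform

print(simplify_matrix([('matrix', [1, 0, 0, 1, 10, 20])]))   # [('translate', [10, 20])]
print(simplify_matrix([('matrix', [2, 0, 0, 3, 0, 0])]))     # [('scale', [2, 3])]
print(simplify_matrix([('matrix', [1, 0, 0, 1, 0, 0])]))     # []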
+ # Translations followed immediately by other translations, + # rotations followed immediately by other rotations, + # scaling followed immediately by other scaling, + # are safe to add. + # Identity skewX/skewY are safe to remove, but how do they accrete? + # |¯ 1 0 0 ¯| + # | tan(A) 1 0 | skews X coordinates by angle A + # |_ 0 0 1 _| + # + # |¯ 1 tan(A) 0 ¯| + # | 0 1 0 | skews Y coordinates by angle A + # |_ 0 0 1 _| + # + # FIXME: A matrix followed immediately by another matrix + # would be safe to multiply together, too. + i = 1 + while i < len(transform): + currType, currArgs = transform[i] + prevType, prevArgs = transform[i - 1] + if currType == prevType == 'translate': + prevArgs[0] += currArgs[0] # x + # for y, only add if the second translation has an explicit y + if len(currArgs) == 2: + if len(prevArgs) == 2: + prevArgs[1] += currArgs[1] # y + elif len(prevArgs) == 1: + prevArgs.append(currArgs[1]) # y del transform[i] - elif (currType == prevType == 'rotate' - and len(prevArgs) == len(currArgs) == 1): - # Only coalesce if both rotations are from the origin. - prevArgs[0] = optimizeAngle(prevArgs[0] + currArgs[0]) - del transform[i] - elif currType == prevType == 'scale': - prevArgs[0] *= currArgs[0] # x - # handle an implicit y - if len(prevArgs) == 2 and len(currArgs) == 2: - # y1 * y2 - prevArgs[1] *= currArgs[1] - elif len(prevArgs) == 1 and len(currArgs) == 2: - # create y2 = uniformscalefactor1 * y2 - prevArgs.append(prevArgs[0] * currArgs[1]) - elif len(prevArgs) == 2 and len(currArgs) == 1: - # y1 * uniformscalefactor2 - prevArgs[1] *= currArgs[0] - del transform[i] - if prevArgs[0] == prevArgs[1] == 1: - # Identity scale! - i -= 1 + if prevArgs[0] == prevArgs[1] == 0: + # Identity translation! + i -= 1 + del transform[i] + elif (currType == prevType == 'rotate' + and len(prevArgs) == len(currArgs) == 1): + # Only coalesce if both rotations are from the origin. + prevArgs[0] = optimizeAngle(prevArgs[0] + currArgs[0]) del transform[i] - else: - i += 1 + elif currType == prevType == 'scale': + prevArgs[0] *= currArgs[0] # x + # handle an implicit y + if len(prevArgs) == 2 and len(currArgs) == 2: + # y1 * y2 + prevArgs[1] *= currArgs[1] + elif len(prevArgs) == 1 and len(currArgs) == 2: + # create y2 = uniformscalefactor1 * y2 + prevArgs.append(prevArgs[0] * currArgs[1]) + elif len(prevArgs) == 2 and len(currArgs) == 1: + # y1 * uniformscalefactor2 + prevArgs[1] *= currArgs[0] + del transform[i] + # if prevArgs is [1] or [1, 1], then it is effectively an + # identity matrix and can be removed. + if prevArgs[0] == 1 and (len(prevArgs) == 1 or prevArgs[1] == 1): + # Identity scale! + i -= 1 + del transform[i] + else: + i += 1 - # Some fixups are needed for single-element transformation lists, since - # the loop above was to coalesce elements with their predecessors in the - # list, and thus it required 2 elements. - i = 0 - while i < len(transform): - currType, currArgs = transform[i] - if ((currType == 'skewX' or currType == 'skewY') - and len(currArgs) == 1 and currArgs[0] == 0): - # Identity skew! - del transform[i] - elif ((currType == 'rotate') - and len(currArgs) == 1 and currArgs[0] == 0): - # Identity rotation! - del transform[i] - else: - i += 1 + # Some fixups are needed for single-element transformation lists, since + # the loop above was to coalesce elements with their predecessors in the + # list, and thus it required 2 elements. 
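# --- Editor's note: illustrative sketch, not part of this patch ---
# Folding adjacent transforms of the same kind, as done above: consecutive
# translations add, consecutive origin rotations add, consecutive scales
# multiply.  For brevity this assumes two-argument translate/scale and
# skips the identity-removal and optimizeAngle() details of the patch.
def coalesce(transform):
    i = 1
    while i < len(transform):
        prev_type, prev_args = transform[i - 1]
        curr_type, curr_args = transform[i]
        if prev_type == curr_type == 'translate':
            prev_args[0] += curr_args[0]
            prev_args[1] += curr_args[1]
            del transform[i]
        elif prev_type == curr_type == 'rotate' and len(prev_args) == len(curr_args) == 1:
            prev_args[0] = (prev_args[0] + curr_args[0]) % 360
            del transform[i]
        elif prev_type == curr_type == 'scale':
            prev_args[0] *= curr_args[0]
            prev_args[1] *= curr_args[1]
            del transform[i]
        else:
            i += 1
    return transform

print(coalesce([('translate', [10, 0]), ('translate', [5, 5]), ('rotate', [30]), ('rotate', [40])]))
# [('translate', [15, 5]), ('rotate', [70])]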
+ i = 0 + while i < len(transform): + currType, currArgs = transform[i] + if ((currType == 'skewX' or currType == 'skewY') + and len(currArgs) == 1 and currArgs[0] == 0): + # Identity skew! + del transform[i] + elif ((currType == 'rotate') + and len(currArgs) == 1 and currArgs[0] == 0): + # Identity rotation! + del transform[i] + else: + i += 1 +def optimizeTransforms(element, options): + """ + Attempts to optimise transform specifications on the given node and its children. -def optimizeTransforms(element, options) : - """ - Attempts to optimise transform specifications on the given node and its children. + Returns the number of bytes saved after performing these reductions. + """ + num = 0 - Returns the number of bytes saved after performing these reductions. - """ - num = 0 + for transformAttr in ['transform', 'patternTransform', 'gradientTransform']: + val = element.getAttribute(transformAttr) + if val != '': + transform = svg_transform_parser.parse(val) - for transformAttr in ['transform', 'patternTransform', 'gradientTransform']: - val = element.getAttribute(transformAttr) - if val != '': - transform = svg_transform_parser.parse(val) + optimizeTransform(transform) - optimizeTransform(transform) + newVal = serializeTransform(transform) - newVal = serializeTransform(transform) + if len(newVal) < len(val): + if len(newVal): + element.setAttribute(transformAttr, newVal) + else: + element.removeAttribute(transformAttr) + num += len(val) - len(newVal) - if len(newVal) < len(val): - if len(newVal): - element.setAttribute(transformAttr, newVal) - else: - element.removeAttribute(transformAttr) - num += len(val) - len(newVal) + for child in element.childNodes: + if child.nodeType == Node.ELEMENT_NODE: + num += optimizeTransforms(child, options) - for child in element.childNodes: - if child.nodeType == 1: - num += optimizeTransforms(child, options) - - return num + return num +def remove_comments(element, stats): + """ + Removes comments from the element and its children. + """ -def removeComments(element) : - """ - Removes comments from the element and its children. - """ - global numCommentBytes - - if isinstance(element, xml.dom.minidom.Document): - # must process the document object separately, because its - # documentElement's nodes have None as their parentNode - for subelement in element.childNodes: - if isinstance(element, xml.dom.minidom.Comment): - numCommentBytes += len(element.data) - element.documentElement.removeChild(subelement) - else: - removeComments(subelement) - elif isinstance(element, xml.dom.minidom.Comment): - numCommentBytes += len(element.data) - element.parentNode.removeChild(element) - else: - for subelement in element.childNodes: - removeComments(subelement) + if isinstance(element, xml.dom.minidom.Comment): + stats.num_bytes_saved_in_comments += len(element.data) + stats.num_comments_removed += 1 + element.parentNode.removeChild(element) + else: + for subelement in element.childNodes[:]: + remove_comments(subelement, stats) - -def embedRasters(element, options) : - import base64 - import urllib - """ +def embed_rasters(element, options): + import base64 + """ Converts raster references to inline images. 
NOTE: there are size limits to base64-encoding handling in browsers - """ - global numRastersEmbedded + """ + num_rasters_embedded = 0 - href = element.getAttributeNS(NS['XLINK'],'href') + href = element.getAttributeNS(NS['XLINK'], 'href') - # if xlink:href is set, then grab the id - if href != '' and len(href) > 1: - # find if href value has filename ext - ext = os.path.splitext(os.path.basename(href))[1].lower()[1:] + # if xlink:href is set, then grab the id + if href != '' and len(href) > 1: + ext = os.path.splitext(os.path.basename(href))[1].lower()[1:] - # look for 'png', 'jpg', and 'gif' extensions - if ext == 'png' or ext == 'jpg' or ext == 'gif': + # only operate on files with 'png', 'jpg', and 'gif' file extensions + if ext in ['png', 'jpg', 'gif']: + # fix common issues with file paths + # TODO: should we warn the user instead of trying to correct those invalid URIs? + # convert backslashes to slashes + href_fixed = href.replace('\\', '/') + # absolute 'file:' URIs have to use three slashes (unless specifying a host which I've never seen) + href_fixed = re.sub('file:/+', 'file:///', href_fixed) - # file:// URLs denote files on the local system too - if href[:7] == 'file://': - href = href[7:] - # does the file exist? - if os.path.isfile(href): - # if this is not an absolute path, set path relative - # to script file based on input arg - infilename = '.' - if options.infilename: infilename = options.infilename - href = os.path.join(os.path.dirname(infilename), href) + # parse the URI to get scheme and path + # in principle it would make sense to work only with this ParseResult and call 'urlunparse()' in the end + # however 'urlunparse(urlparse(file:raster.png))' -> 'file:///raster.png' which is nonsense + parsed_href = urllib.parse.urlparse(href_fixed) - rasterdata = '' - # test if file exists locally - if os.path.isfile(href): - # open raster file as raw binary - raster = open( href, "rb") - rasterdata = raster.read() - elif href[:7] == 'http://': - webFile = urllib.urlopen( href ) - rasterdata = webFile.read() - webFile.close() + # assume locations without protocol point to local files (and should use the 'file:' protocol) + if parsed_href.scheme == '': + parsed_href = parsed_href._replace(scheme='file') + if href_fixed[0] == '/': + href_fixed = 'file://' + href_fixed + else: + href_fixed = 'file:' + href_fixed - # ... should we remove all images which don't resolve? - if rasterdata != '' : - # base64-encode raster - b64eRaster = base64.b64encode( rasterdata ) + # relative local paths are relative to the input file, therefore temporarily change the working dir + working_dir_old = None + if parsed_href.scheme == 'file' and parsed_href.path[0] != '/': + if options.infilename: + working_dir_old = os.getcwd() + working_dir_new = os.path.abspath(os.path.dirname(options.infilename)) + os.chdir(working_dir_new) - # set href attribute to base64-encoded equivalent - if b64eRaster != '': - # PNG and GIF both have MIME Type 'image/[ext]', but - # JPEG has MIME Type 'image/jpeg' - if ext == 'jpg': - ext = 'jpeg' + # open/download the file + try: + file = urllib.request.urlopen(href_fixed) + rasterdata = file.read() + file.close() + except Exception as e: + print("WARNING: Could not open file '" + href + "' for embedding. " + "The raster image will be kept as a reference but might be invalid. 
" + "(Exception details: " + str(e) + ")", file=options.ensure_value("stdout", sys.stdout)) + rasterdata = '' + finally: + # always restore initial working directory if we changed it above + if working_dir_old is not None: + os.chdir(working_dir_old) - element.setAttributeNS(NS['XLINK'], 'href', 'data:image/' + ext + ';base64,' + b64eRaster) - numRastersEmbedded += 1 - del b64eRaster + # TODO: should we remove all images which don't resolve? + # then we also have to consider unreachable remote locations (i.e. if there is no internet connection) + if rasterdata != '': + # base64-encode raster + b64eRaster = base64.b64encode(rasterdata) + # set href attribute to base64-encoded equivalent + if b64eRaster != '': + # PNG and GIF both have MIME Type 'image/[ext]', but + # JPEG has MIME Type 'image/jpeg' + if ext == 'jpg': + ext = 'jpeg' + + element.setAttributeNS(NS['XLINK'], 'href', + 'data:image/' + ext + ';base64,' + b64eRaster.decode()) + num_rasters_embedded += 1 + del b64eRaster + return num_rasters_embedded def properlySizeDoc(docElement, options): - # get doc width and height - w = SVGLength(docElement.getAttribute('width')) - h = SVGLength(docElement.getAttribute('height')) + # get doc width and height + w = SVGLength(docElement.getAttribute('width')) + h = SVGLength(docElement.getAttribute('height')) - # if width/height are not unitless or px then it is not ok to rewrite them into a viewBox. - # well, it may be OK for Web browsers and vector editors, but not for librsvg. - if options.renderer_workaround: - if ((w.units != Unit.NONE and w.units != Unit.PX) or - (h.units != Unit.NONE and h.units != Unit.PX)): - return - - # else we have a statically sized image and we should try to remedy that - - # parse viewBox attribute - vbSep = re.split("\\s*\\,?\\s*", docElement.getAttribute('viewBox'), 3) - # if we have a valid viewBox we need to check it - vbWidth,vbHeight = 0,0 - if len(vbSep) == 4: - try: - # if x or y are specified and non-zero then it is not ok to overwrite it - vbX = float(vbSep[0]) - vbY = float(vbSep[1]) - if vbX != 0 or vbY != 0: + # if width/height are not unitless or px then it is not ok to rewrite them into a viewBox. + # well, it may be OK for Web browsers and vector editors, but not for librsvg. 
+ if options.renderer_workaround: + if ((w.units != Unit.NONE and w.units != Unit.PX) or + (h.units != Unit.NONE and h.units != Unit.PX)): return - # if width or height are not equal to doc width/height then it is not ok to overwrite it - vbWidth = float(vbSep[2]) - vbHeight = float(vbSep[3]) - if vbWidth != w.value or vbHeight != h.value: - return - # if the viewBox did not parse properly it is invalid and ok to overwrite it - except ValueError: - pass + # else we have a statically sized image and we should try to remedy that - # at this point it's safe to set the viewBox and remove width/height - docElement.setAttribute('viewBox', '0 0 %s %s' % (w.value, h.value)) - docElement.removeAttribute('width') - docElement.removeAttribute('height') + # parse viewBox attribute + vbSep = RE_COMMA_WSP.split(docElement.getAttribute('viewBox')) + # if we have a valid viewBox we need to check it + if len(vbSep) == 4: + try: + # if x or y are specified and non-zero then it is not ok to overwrite it + vbX = float(vbSep[0]) + vbY = float(vbSep[1]) + if vbX != 0 or vbY != 0: + return + # if width or height are not equal to doc width/height then it is not ok to overwrite it + vbWidth = float(vbSep[2]) + vbHeight = float(vbSep[3]) + if vbWidth != w.value or vbHeight != h.value: + return + # if the viewBox did not parse properly it is invalid and ok to overwrite it + except ValueError: + pass + + # at this point it's safe to set the viewBox and remove width/height + docElement.setAttribute('viewBox', '0 0 %s %s' % (w.value, h.value)) + docElement.removeAttribute('width') + docElement.removeAttribute('height') def remapNamespacePrefix(node, oldprefix, newprefix): - if node == None or node.nodeType != 1: return + if node is None or node.nodeType != Node.ELEMENT_NODE: + return - if node.prefix == oldprefix: - localName = node.localName - namespace = node.namespaceURI - doc = node.ownerDocument - parent = node.parentNode + if node.prefix == oldprefix: + localName = node.localName + namespace = node.namespaceURI + doc = node.ownerDocument + parent = node.parentNode - # create a replacement node - newNode = None - if newprefix != '': - newNode = doc.createElementNS(namespace, newprefix+":"+localName) - else: - newNode = doc.createElement(localName); + # create a replacement node + if newprefix != '': + newNode = doc.createElementNS(namespace, newprefix + ":" + localName) + else: + newNode = doc.createElement(localName) - # add all the attributes - attrList = node.attributes - for i in xrange(attrList.length): - attr = attrList.item(i) - newNode.setAttributeNS( attr.namespaceURI, attr.localName, attr.nodeValue) + # add all the attributes + attrList = node.attributes + for i in range(attrList.length): + attr = attrList.item(i) + newNode.setAttributeNS(attr.namespaceURI, attr.name, attr.nodeValue) - # clone and add all the child nodes - for child in node.childNodes: - newNode.appendChild(child.cloneNode(True)) + # clone and add all the child nodes + for child in node.childNodes: + newNode.appendChild(child.cloneNode(True)) - # replace old node with new node - parent.replaceChild( newNode, node ) - # set the node to the new node in the remapped namespace prefix - node = newNode + # replace old node with new node + parent.replaceChild(newNode, node) + # set the node to the new node in the remapped namespace prefix + node = newNode - # now do all child nodes - for child in node.childNodes : - remapNamespacePrefix(child, oldprefix, newprefix) + # now do all child nodes + for child in node.childNodes: + 
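# --- Editor's note: illustrative sketch, not part of this patch ---
# What properlySizeDoc() achieves for a statically sized document: the
# pixel width/height move into an equivalent viewBox so the image scales.
import xml.dom.minidom

doc = xml.dom.minidom.parseString(
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="50"/>')
root = doc.documentElement
root.setAttribute('viewBox', '0 0 %s %s' % (root.getAttribute('width'), root.getAttribute('height')))
root.removeAttribute('width')
root.removeAttribute('height')
print(root.toxml())   # e.g. <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 50"/>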
remapNamespacePrefix(child, oldprefix, newprefix) +def make_well_formed(text, quote_dict=None): + if quote_dict is None: + quote_dict = XML_ENTS_NO_QUOTES + if not any(c in text for c in quote_dict): + # The quote-able characters are quite rare in SVG (they mostly only + # occur in text elements in practice). Therefore it make sense to + # optimize for this common case + return text + return ''.join(quote_dict[c] if c in quote_dict else c for c in text) -def makeWellFormed(str): - xml_ents = { '<':'<', '>':'>', '&':'&', "'":''', '"':'"'} -# starr = [] -# for c in str: -# if c in xml_ents: -# starr.append(xml_ents[c]) -# else: -# starr.append(c) +def choose_quote_character(value): + quot_count = value.count('"') + if quot_count == 0 or quot_count <= value.count("'"): + # Fewest "-symbols (if there are 0, we pick this to avoid spending + # time counting the '-symbols as it won't matter) + quote = '"' + xml_ent = XML_ENTS_ESCAPE_QUOT + else: + quote = "'" + xml_ent = XML_ENTS_ESCAPE_APOS + return quote, xml_ent - # this list comprehension is short-form for the above for-loop: - return ''.join([xml_ents[c] if c in xml_ents else c for c in str]) +TEXT_CONTENT_ELEMENTS = ['text', 'tspan', 'tref', 'textPath', 'altGlyph', + 'flowDiv', 'flowPara', 'flowSpan', 'flowTref', 'flowLine'] + + +KNOWN_ATTRS = [ + # TODO: Maybe update with full list from https://www.w3.org/TR/SVG/attindex.html + # (but should be kept intuitively ordered) + 'id', 'xml:id', 'class', + 'transform', + 'x', 'y', 'z', 'width', 'height', 'x1', 'x2', 'y1', 'y2', + 'dx', 'dy', 'rotate', 'startOffset', 'method', 'spacing', + 'cx', 'cy', 'r', 'rx', 'ry', 'fx', 'fy', + 'd', 'points', + ] + sorted(svgAttributes) + [ + 'style', + ] + +KNOWN_ATTRS_ORDER_BY_NAME = defaultdict(lambda: len(KNOWN_ATTRS), + {name: order for order, name in enumerate(KNOWN_ATTRS)}) + + +# use custom order for known attributes and alphabetical order for the rest +def _attribute_sort_key_function(attribute): + name = attribute.name + order_value = KNOWN_ATTRS_ORDER_BY_NAME[name] + return order_value, name + + +def attributes_ordered_for_output(element): + if not element.hasAttributes(): + return [] + attribute = element.attributes + # The .item(i) call is painfully slow (bpo#40689). Therefore we ensure we + # call it at most once per attribute. + # - it would be many times faster to use `attribute.values()` but sadly + # that is an "experimental" interface. 
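# --- Editor's note: illustrative sketch, not part of this patch ---
# Choosing the attribute quote character that needs the least escaping, as
# choose_quote_character()/make_well_formed() do above (condensed: only the
# characters relevant to the example are escaped here).
def quote_attr(value):
    if value.count('"') == 0 or value.count('"') <= value.count("'"):
        quote, escaped = '"', value.replace('&', '&amp;').replace('<', '&lt;').replace('"', '&quot;')
    else:
        quote, escaped = "'", value.replace('&', '&amp;').replace('<', '&lt;').replace("'", '&apos;')
    return quote + escaped + quote

print(quote_attr("it's fine"))    # "it's fine"   -> wrapped in double quotes, nothing escaped
print(quote_attr('say "hi"'))     # 'say "hi"'    -> wrapped in single quotes, nothing escaped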
+ return sorted((attribute.item(i) for i in range(attribute.length)), + key=_attribute_sort_key_function) # hand-rolled serialization function that has the following benefits: # - pretty printing # - somewhat judicious use of whitespace # - ensure id attributes are first -def serializeXML(element, options, ind = 0, preserveWhitespace = False): - outParts = [] +def serializeXML(element, options, indent_depth=0, preserveWhitespace=False): + outParts = [] - indent = ind - I='' - if options.indent_type == 'tab': I='\t' - elif options.indent_type == 'space': I=' ' + indent_type = '' + newline = '' + if options.newlines: + if options.indent_type == 'tab': + indent_type = '\t' + elif options.indent_type == 'space': + indent_type = ' ' + indent_type *= options.indent_depth + newline = '\n' - outParts.extend([(I * ind), '<', element.nodeName]) + outParts.extend([(indent_type * indent_depth), '<', element.nodeName]) - # always serialize the id or xml:id attributes first - if element.getAttribute('id') != '': - id = element.getAttribute('id') - quot = '"' - if id.find('"') != -1: - quot = "'" - outParts.extend([' id=', quot, id, quot]) - if element.getAttribute('xml:id') != '': - id = element.getAttribute('xml:id') - quot = '"' - if id.find('"') != -1: - quot = "'" - outParts.extend([' xml:id=', quot, id, quot]) + # now serialize the other attributes + attrs = attributes_ordered_for_output(element) + for attr in attrs: + attrValue = attr.nodeValue + quote, xml_ent = choose_quote_character(attrValue) + attrValue = make_well_formed(attrValue, xml_ent) - # now serialize the other attributes - attrList = element.attributes - for num in xrange(attrList.length) : - attr = attrList.item(num) - if attr.nodeName == 'id' or attr.nodeName == 'xml:id': continue - # if the attribute value contains a double-quote, use single-quotes - quot = '"' - if attr.nodeValue.find('"') != -1: - quot = "'" + if attr.nodeName == 'style': + # sort declarations + attrValue = ';'.join(sorted(attrValue.split(';'))) - attrValue = makeWellFormed( attr.nodeValue ) + outParts.append(' ') + # preserve xmlns: if it is a namespace prefix declaration + if attr.prefix is not None: + outParts.extend([attr.prefix, ':']) + elif attr.namespaceURI is not None: + if attr.namespaceURI == 'http://www.w3.org/2000/xmlns/' and attr.nodeName.find('xmlns') == -1: + outParts.append('xmlns:') + elif attr.namespaceURI == 'http://www.w3.org/1999/xlink': + outParts.append('xlink:') + outParts.extend([attr.localName, '=', quote, attrValue, quote]) - outParts.append(' ') - # preserve xmlns: if it is a namespace prefix declaration - if attr.prefix != None: - outParts.extend([attr.prefix, ':']) - elif attr.namespaceURI != None: - if attr.namespaceURI == 'http://www.w3.org/2000/xmlns/' and attr.nodeName.find('xmlns') == -1: - outParts.append('xmlns:') - elif attr.namespaceURI == 'http://www.w3.org/1999/xlink': - outParts.append('xlink:') - outParts.extend([attr.localName, '=', quot, attrValue, quot]) + if attr.nodeName == 'xml:space': + if attrValue == 'preserve': + preserveWhitespace = True + elif attrValue == 'default': + preserveWhitespace = False - if attr.nodeName == 'xml:space': - if attrValue == 'preserve': - preserveWhitespace = True - elif attrValue == 'default': - preserveWhitespace = False + children = element.childNodes + if children.length == 0: + outParts.append('/>') + else: + outParts.append('>') - # if no children, self-close - children = element.childNodes - if children.length > 0: - outParts.append('>') + onNewLine = False + for child in 
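# --- Editor's note: illustrative sketch, not part of this patch ---
# The attribute ordering used for output: known attributes keep their
# hand-picked position, everything else sorts alphabetically after them.
# `known` is a shortened stand-in for KNOWN_ATTRS.
from collections import defaultdict

known = ['id', 'class', 'transform', 'x', 'y', 'width', 'height', 'd', 'style']
order = defaultdict(lambda: len(known), {name: i for i, name in enumerate(known)})

attrs = ['style', 'fill', 'id', 'aria-label', 'width']
print(sorted(attrs, key=lambda name: (order[name], name)))
# ['id', 'width', 'style', 'aria-label', 'fill']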
element.childNodes: + # element node + if child.nodeType == Node.ELEMENT_NODE: + # do not indent inside text content elements as in SVG there's a difference between + # "text1\ntext2" and + # "text1\n text2" + # see https://www.w3.org/TR/SVG/text.html#WhiteSpace + if preserveWhitespace or element.nodeName in TEXT_CONTENT_ELEMENTS: + outParts.append(serializeXML(child, options, 0, preserveWhitespace)) + else: + outParts.extend([newline, serializeXML(child, options, indent_depth + 1, preserveWhitespace)]) + onNewLine = True + # text node + elif child.nodeType == Node.TEXT_NODE: + text_content = child.nodeValue + if not preserveWhitespace: + # strip / consolidate whitespace according to spec, see + # https://www.w3.org/TR/SVG/text.html#WhiteSpace + if element.nodeName in TEXT_CONTENT_ELEMENTS: + text_content = text_content.replace('\n', '') + text_content = text_content.replace('\t', ' ') + if child == element.firstChild: + text_content = text_content.lstrip() + elif child == element.lastChild: + text_content = text_content.rstrip() + while ' ' in text_content: + text_content = text_content.replace(' ', ' ') + else: + text_content = text_content.strip() + outParts.append(make_well_formed(text_content)) + # CDATA node + elif child.nodeType == Node.CDATA_SECTION_NODE: + outParts.extend(['<![CDATA[', child.nodeValue, ']]>']) + # Comment node + elif child.nodeType == Node.COMMENT_NODE: + outParts.extend([newline, indent_type * (indent_depth+1), '<!--', child.nodeValue, '-->']) + # TODO: entities, processing instructions, what else? + else: # ignore the rest + pass - onNewLine = False - for child in element.childNodes: - # element node - if child.nodeType == 1: - if preserveWhitespace: - outParts.append(serializeXML(child, options, 0, preserveWhitespace)) - else: - outParts.extend(['\n', serializeXML(child, options, indent + 1, preserveWhitespace)]) - onNewLine = True - # text node - elif child.nodeType == 3: - # trim it only in the case of not being a child of an element - # where whitespace might be important - if preserveWhitespace: - outParts.append(makeWellFormed(child.nodeValue)) - else: - outParts.append(makeWellFormed(child.nodeValue.strip())) - # CDATA node - elif child.nodeType == 4: - outParts.extend(['<![CDATA[', child.nodeValue, ']]>']) - # Comment node - elif child.nodeType == 8: - outParts.extend(['<!--', child.nodeValue, '-->']) - # TODO: entities, processing instructions, what else? 
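# --- Editor's note: illustrative sketch, not part of this patch ---
# The whitespace consolidation applied above to text nodes inside text
# content elements (when xml:space is not "preserve"): newlines disappear,
# tabs become spaces, runs of spaces collapse, and space is stripped only
# at the very start/end of the element's text.
def consolidate(text, is_first, is_last):
    text = text.replace('\n', '').replace('\t', ' ')
    if is_first:
        text = text.lstrip()
    if is_last:
        text = text.rstrip()
    while '  ' in text:
        text = text.replace('  ', ' ')
    return text

print(repr(consolidate('\n      Hello\t\n      world\n    ', True, True)))   # 'Hello world'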
- else: # ignore the rest - pass - - if onNewLine: outParts.append(I * ind) - outParts.extend(['</', element.nodeName, '>']) - if indent > 0: outParts.append('\n') - else: - outParts.append('/>') - if indent > 0: outParts.append('\n') - - return "".join(outParts) + if onNewLine: + outParts.append(newline) + outParts.append(indent_type * indent_depth) + outParts.extend(['</', element.nodeName, '>']) + return "".join(outParts) # this is the main method # input is a string representation of the input XML # returns a string representation of the output XML -def scourString(in_string, options=None): - if options is None: - options = _options_parser.get_default_values() - getcontext().prec = options.digits - global numAttrsRemoved - global numStylePropsFixed - global numElemsRemoved - global numBytesSavedInColors - global numCommentsRemoved - global numBytesSavedInIDs - global numBytesSavedInLengths - global numBytesSavedInTransforms - doc = xml.dom.minidom.parseString(in_string) +def scourString(in_string, options=None, stats=None): + # sanitize options (take missing attributes from defaults, discard unknown attributes) + options = sanitizeOptions(options) - # for whatever reason this does not always remove all inkscape/sodipodi attributes/elements - # on the first pass, so we do it multiple times - # does it have to do with removal of children affecting the childlist? - if options.keep_editor_data == False: - while removeNamespacedElements( doc.documentElement, unwanted_ns ) > 0 : - pass - while removeNamespacedAttributes( doc.documentElement, unwanted_ns ) > 0 : - pass + if stats is None: + # This is easier than doing "if stats is not None:" checks all over the place + stats = ScourStats() - # remove the xmlns: declarations now - xmlnsDeclsToRemove = [] - attrList = doc.documentElement.attributes - for num in xrange(attrList.length) : - if attrList.item(num).nodeValue in unwanted_ns : - xmlnsDeclsToRemove.append(attrList.item(num).nodeName) + # default or invalid value + if(options.cdigits < 0): + options.cdigits = options.digits - for attr in xmlnsDeclsToRemove : - doc.documentElement.removeAttribute(attr) - numAttrsRemoved += 1 + # create decimal contexts with reduced precision for scouring numbers + # calculations should be done in the default context (precision defaults to 28 significant digits) + # to minimize errors + global scouringContext + global scouringContextC # even more reduced precision for control points + scouringContext = Context(prec=options.digits) + scouringContextC = Context(prec=options.cdigits) - # ensure namespace for SVG is declared - # TODO: what if the default namespace is something else (i.e. some valid namespace)? - if doc.documentElement.getAttribute('xmlns') != 'http://www.w3.org/2000/svg': - doc.documentElement.setAttribute('xmlns', 'http://www.w3.org/2000/svg') - # TODO: throw error or warning? 
+ doc = xml.dom.minidom.parseString(in_string) - # check for redundant SVG namespace declaration - attrList = doc.documentElement.attributes - xmlnsDeclsToRemove = [] - redundantPrefixes = [] - for i in xrange(attrList.length): - attr = attrList.item(i) - name = attr.nodeName - val = attr.nodeValue - if name[0:6] == 'xmlns:' and val == 'http://www.w3.org/2000/svg': - redundantPrefixes.append(name[6:]) - xmlnsDeclsToRemove.append(name) + # determine number of flowRoot elements in input document + # flowRoot elements don't render at all on current browsers (04/2016) + cnt_flowText_el = len(doc.getElementsByTagName('flowRoot')) + if cnt_flowText_el: + errmsg = "SVG input document uses {} flow text elements, " \ + "which won't render on browsers!".format(cnt_flowText_el) + if options.error_on_flowtext: + raise Exception(errmsg) + else: + print("WARNING: {}".format(errmsg), file=sys.stderr) - for attrName in xmlnsDeclsToRemove: - doc.documentElement.removeAttribute(attrName) + # remove descriptive elements + stats.num_elements_removed += remove_descriptive_elements(doc, options) - for prefix in redundantPrefixes: - remapNamespacePrefix(doc.documentElement, prefix, '') + # remove unneeded namespaced elements/attributes added by common editors + if options.keep_editor_data is False: + stats.num_elements_removed += removeNamespacedElements(doc.documentElement, + unwanted_ns) + stats.num_attributes_removed += removeNamespacedAttributes(doc.documentElement, + unwanted_ns) - if options.strip_comments: - numCommentsRemoved = removeComments(doc) + # remove the xmlns: declarations now + xmlnsDeclsToRemove = [] + attrList = doc.documentElement.attributes + for index in range(attrList.length): + if attrList.item(index).nodeValue in unwanted_ns: + xmlnsDeclsToRemove.append(attrList.item(index).nodeName) - # repair style (remove unnecessary style properties and change them into XML attributes) - numStylePropsFixed = repairStyle(doc.documentElement, options) + for attr in xmlnsDeclsToRemove: + doc.documentElement.removeAttribute(attr) + stats.num_attributes_removed += len(xmlnsDeclsToRemove) - # convert colors to #RRGGBB format - if options.simple_colors: - numBytesSavedInColors = convertColors(doc.documentElement) + # ensure namespace for SVG is declared + # TODO: what if the default namespace is something else (i.e. some valid namespace)? + if doc.documentElement.getAttribute('xmlns') != 'http://www.w3.org/2000/svg': + doc.documentElement.setAttribute('xmlns', 'http://www.w3.org/2000/svg') + # TODO: throw error or warning? 
- # remove <metadata> if the user wants to - if options.remove_metadata: - removeMetadataElements(doc) + # check for redundant and unused SVG namespace declarations + def xmlnsUnused(prefix, namespace): + if doc.getElementsByTagNameNS(namespace, "*"): + return False + else: + for element in doc.getElementsByTagName("*"): + for attribute in element.attributes.values(): + if attribute.name.startswith(prefix): + return False + return True - # remove unreferenced gradients/patterns outside of defs - # and most unreferenced elements inside of defs - while removeUnreferencedElements(doc) > 0: - pass + attrList = doc.documentElement.attributes + xmlnsDeclsToRemove = [] + redundantPrefixes = [] + for i in range(attrList.length): + attr = attrList.item(i) + name = attr.nodeName + val = attr.nodeValue + if name[0:6] == 'xmlns:': + if val == 'http://www.w3.org/2000/svg': + redundantPrefixes.append(name[6:]) + xmlnsDeclsToRemove.append(name) + elif xmlnsUnused(name[6:], val): + xmlnsDeclsToRemove.append(name) - # remove empty defs, metadata, g - # NOTE: these elements will be removed if they just have whitespace-only text nodes - for tag in ['defs', 'metadata', 'g'] : - for elem in doc.documentElement.getElementsByTagName(tag) : - removeElem = not elem.hasChildNodes() - if removeElem == False : - for child in elem.childNodes : - if child.nodeType in [1, 4, 8]: - break - elif child.nodeType == 3 and not child.nodeValue.isspace(): - break - else: - removeElem = True - if removeElem : + for attrName in xmlnsDeclsToRemove: + doc.documentElement.removeAttribute(attrName) + stats.num_attributes_removed += len(xmlnsDeclsToRemove) + + for prefix in redundantPrefixes: + remapNamespacePrefix(doc.documentElement, prefix, '') + + if options.strip_comments: + remove_comments(doc, stats) + + if options.strip_xml_space_attribute and doc.documentElement.hasAttribute('xml:space'): + doc.documentElement.removeAttribute('xml:space') + stats.num_attributes_removed += 1 + + # repair style (remove unnecessary style properties and change them into XML attributes) + stats.num_style_properties_fixed = repairStyle(doc.documentElement, options) + + # convert colors to #RRGGBB format + if options.simple_colors: + stats.num_bytes_saved_in_colors = convertColors(doc.documentElement) + + # remove unreferenced gradients/patterns outside of defs + # and most unreferenced elements inside of defs + while remove_unreferenced_elements(doc, options.keep_defs, stats) > 0: + pass + + # remove empty defs, metadata, g + # NOTE: these elements will be removed if they just have whitespace-only text nodes + for tag in ['defs', 'title', 'desc', 'metadata', 'g']: + for elem in doc.documentElement.getElementsByTagName(tag): + removeElem = not elem.hasChildNodes() + if removeElem is False: + for child in elem.childNodes: + if child.nodeType in [Node.ELEMENT_NODE, Node.CDATA_SECTION_NODE, Node.COMMENT_NODE]: + break + elif child.nodeType == Node.TEXT_NODE and not child.nodeValue.isspace(): + break + else: + removeElem = True + if removeElem: + elem.parentNode.removeChild(elem) + stats.num_elements_removed += 1 + + if options.strip_ids: + referencedIDs = findReferencedElements(doc.documentElement) + identifiedElements = unprotected_ids(doc, options) + stats.num_ids_removed += remove_unreferenced_ids(referencedIDs, + identifiedElements) + + while remove_duplicate_gradient_stops(doc, stats) > 0: + pass + + # remove gradients that are only referenced by one other gradient + while collapse_singly_referenced_gradients(doc, stats) > 0: + pass + + # remove 
duplicate gradients + stats.num_elements_removed += removeDuplicateGradients(doc) + + if options.group_collapse: + stats.num_elements_removed += mergeSiblingGroupsWithCommonAttributes(doc.documentElement) + # create <g> elements if there are runs of elements with the same attributes. + # this MUST be before moveCommonAttributesToParentGroup. + if options.group_create: + create_groups_for_common_attributes(doc.documentElement, stats) + + # move common attributes to parent group + # NOTE: the if the <svg> element's immediate children + # all have the same value for an attribute, it must not + # get moved to the <svg> element. The <svg> element + # doesn't accept fill=, stroke= etc.! + referencedIds = findReferencedElements(doc.documentElement) + for child in doc.documentElement.childNodes: + stats.num_attributes_removed += moveCommonAttributesToParentGroup(child, referencedIds) + + # remove unused attributes from parent + stats.num_attributes_removed += removeUnusedAttributesOnParent(doc.documentElement) + + # Collapse groups LAST, because we've created groups. If done before + # moveAttributesToParentGroup, empty <g>'s may remain. + if options.group_collapse: + while remove_nested_groups(doc.documentElement, stats) > 0: + pass + + # remove unnecessary closing point of polygons and scour points + for polygon in doc.documentElement.getElementsByTagName('polygon'): + stats.num_points_removed_from_polygon += clean_polygon(polygon, options) + + # scour points of polyline + for polyline in doc.documentElement.getElementsByTagName('polyline'): + cleanPolyline(polyline, options) + + # clean path data + for elem in doc.documentElement.getElementsByTagName('path'): + if elem.getAttribute('d') == '': elem.parentNode.removeChild(elem) - numElemsRemoved += 1 + else: + clean_path(elem, options, stats) - if options.strip_ids: - bContinueLooping = True - while bContinueLooping: - identifiedElements = unprotected_ids(doc, options) - referencedIDs = findReferencedElements(doc.documentElement) - bContinueLooping = (removeUnreferencedIDs(referencedIDs, identifiedElements) > 0) + # shorten ID names as much as possible + if options.shorten_ids: + stats.num_bytes_saved_in_ids += shortenIDs(doc, options.shorten_ids_prefix, options) - while removeDuplicateGradientStops(doc) > 0: - pass + # scour lengths (including coordinates) + for type in ['svg', 'image', 'rect', 'circle', 'ellipse', 'line', + 'linearGradient', 'radialGradient', 'stop', 'filter']: + for elem in doc.getElementsByTagName(type): + for attr in ['x', 'y', 'width', 'height', 'cx', 'cy', 'r', 'rx', 'ry', + 'x1', 'y1', 'x2', 'y2', 'fx', 'fy', 'offset']: + if elem.getAttribute(attr) != '': + elem.setAttribute(attr, scourLength(elem.getAttribute(attr))) + viewBox = doc.documentElement.getAttribute('viewBox') + if viewBox: + lengths = RE_COMMA_WSP.split(viewBox) + lengths = [scourUnitlessLength(length) for length in lengths] + doc.documentElement.setAttribute('viewBox', ' '.join(lengths)) - # remove gradients that are only referenced by one other gradient - while collapseSinglyReferencedGradients(doc) > 0: - pass + # more length scouring in this function + stats.num_bytes_saved_in_lengths = reducePrecision(doc.documentElement) - # remove duplicate gradients - while removeDuplicateGradients(doc) > 0: - pass + # remove default values of attributes + stats.num_attributes_removed += removeDefaultAttributeValues(doc.documentElement, options) - # create <g> elements if there are runs of elements with the same attributes. 
- # this MUST be before moveCommonAttributesToParentGroup. - if options.group_create: - createGroupsForCommonAttributes(doc.documentElement) + # reduce the length of transformation attributes + stats.num_bytes_saved_in_transforms = optimizeTransforms(doc.documentElement, options) - # move common attributes to parent group - # NOTE: the if the <svg> element's immediate children - # all have the same value for an attribute, it must not - # get moved to the <svg> element. The <svg> element - # doesn't accept fill=, stroke= etc.! - referencedIds = findReferencedElements(doc.documentElement) - for child in doc.documentElement.childNodes: - numAttrsRemoved += moveCommonAttributesToParentGroup(child, referencedIds) + # convert rasters references to base64-encoded strings + if options.embed_rasters: + for elem in doc.documentElement.getElementsByTagName('image'): + stats.num_rasters_embedded += embed_rasters(elem, options) - # remove unused attributes from parent - numAttrsRemoved += removeUnusedAttributesOnParent(doc.documentElement) + # properly size the SVG document (ideally width/height should be 100% with a viewBox) + if options.enable_viewboxing: + properlySizeDoc(doc.documentElement, options) - # Collapse groups LAST, because we've created groups. If done before - # moveAttributesToParentGroup, empty <g>'s may remain. - if options.group_collapse: - while removeNestedGroups(doc.documentElement) > 0: - pass - - # remove unnecessary closing point of polygons and scour points - for polygon in doc.documentElement.getElementsByTagName('polygon') : - cleanPolygon(polygon, options) - - # scour points of polyline - for polyline in doc.documentElement.getElementsByTagName('polyline') : - cleanPolyline(polyline, options) - - # clean path data - for elem in doc.documentElement.getElementsByTagName('path') : - if elem.getAttribute('d') == '': - elem.parentNode.removeChild(elem) - else: - cleanPath(elem, options) - - # shorten ID names as much as possible - if options.shorten_ids: - numBytesSavedInIDs += shortenIDs(doc, options.shorten_ids_prefix, unprotected_ids(doc, options)) - - # scour lengths (including coordinates) - for type in ['svg', 'image', 'rect', 'circle', 'ellipse', 'line', 'linearGradient', 'radialGradient', 'stop', 'filter']: - for elem in doc.getElementsByTagName(type): - for attr in ['x', 'y', 'width', 'height', 'cx', 'cy', 'r', 'rx', 'ry', - 'x1', 'y1', 'x2', 'y2', 'fx', 'fy', 'offset']: - if elem.getAttribute(attr) != '': - elem.setAttribute(attr, scourLength(elem.getAttribute(attr))) - - # more length scouring in this function - numBytesSavedInLengths = reducePrecision(doc.documentElement) - - # remove default values of attributes - numAttrsRemoved += removeDefaultAttributeValues(doc.documentElement, options) - - # reduce the length of transformation attributes - numBytesSavedInTransforms = optimizeTransforms(doc.documentElement, options) - - # convert rasters references to base64-encoded strings - if options.embed_rasters: - for elem in doc.documentElement.getElementsByTagName('image') : - embedRasters(elem, options) - - # properly size the SVG document (ideally width/height should be 100% with a viewBox) - if options.enable_viewboxing: - properlySizeDoc(doc.documentElement, options) - - # output the document as a pretty string with a single space for indent - # NOTE: removed pretty printing because of this problem: - # http://ronrothman.com/public/leftbraned/xml-dom-minidom-toprettyxml-and-silly-whitespace/ - # rolled our own serialize function here to save on space, put id first, 
customize indentation, etc + # output the document as a pretty string with a single space for indent + # NOTE: removed pretty printing because of this problem: + # http://ronrothman.com/public/leftbraned/xml-dom-minidom-toprettyxml-and-silly-whitespace/ + # rolled our own serialize function here to save on space, put id first, customize indentation, etc # out_string = doc.documentElement.toprettyxml(' ') - out_string = serializeXML(doc.documentElement, options) + '\n' + out_string = serializeXML(doc.documentElement, options) + '\n' - # now strip out empty lines - lines = [] - # Get rid of empty lines - for line in out_string.splitlines(True): - if line.strip(): - lines.append(line) + # return the string with its XML prolog and surrounding comments + if options.strip_xml_prolog is False: + total_output = '<?xml version="1.0" encoding="UTF-8"' + if doc.standalone: + total_output += ' standalone="yes"' + total_output += '?>\n' + else: + total_output = "" - # return the string with its XML prolog and surrounding comments - if options.strip_xml_prolog == False: - total_output = '<?xml version="1.0" encoding="UTF-8" standalone="no"?>\n' - else: - total_output = "" - - for child in doc.childNodes: - if child.nodeType == 1: - total_output += "".join(lines) - else: # doctypes, entities, comments - total_output += child.toxml() + '\n' - - return total_output + for child in doc.childNodes: + if child.nodeType == Node.ELEMENT_NODE: + total_output += out_string + else: # doctypes, entities, comments + total_output += child.toxml() + '\n' + return total_output # used mostly by unit tests # input is a filename # returns the minidom doc representation of the SVG -def scourXmlFile(filename, options=None): - in_string = open(filename).read() - out_string = scourString(in_string, options) - return xml.dom.minidom.parseString(out_string.encode('utf-8')) +def scourXmlFile(filename, options=None, stats=None): + # sanitize options (take missing attributes from defaults, discard unknown attributes) + options = sanitizeOptions(options) + # we need to make sure infilename is set correctly (otherwise relative references in the SVG won't work) + options.ensure_value("infilename", filename) + # open the file and scour it + with open(filename, "rb") as f: + in_string = f.read() + out_string = scourString(in_string, options, stats=stats) + + # prepare the output xml.dom.minidom object + doc = xml.dom.minidom.parseString(out_string.encode('utf-8')) + + # since minidom does not seem to parse DTDs properly + # manually declare all attributes with name "id" to be of type ID + # (otherwise things like doc.getElementById() won't work) + all_nodes = doc.getElementsByTagName("*") + for node in all_nodes: + try: + node.setIdAttribute('id') + except NotFoundErr: + pass + + return doc # GZ: Seems most other commandline tools don't do this, is it really wanted? class HeaderedFormatter(optparse.IndentedHelpFormatter): - """ - Show application name, version number, and copyright statement - above usage information. - """ - def format_usage(self, usage): - return "%s %s\n%s\n%s" % (APP, VER, COPYRIGHT, - optparse.IndentedHelpFormatter.format_usage(self, usage)) + """ + Show application name, version number, and copyright statement + above usage information. 
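(Editorial aside, not part of the patch: the reworked `scourXmlFile` a few lines up scours a file and returns a minidom document on which every `id` attribute has been declared as an XML ID, so `getElementById()` works on the result. A short sketch under that assumption; the filename and ID are placeholders.)

```python
# Illustrative sketch only -- not taken from the diff. "drawing.svg" and "layer1"
# are placeholder names.
from scour.scour import scourXmlFile

doc = scourXmlFile("drawing.svg")        # scour the file and get a minidom Document back
layer = doc.getElementById("layer1")     # possible because 'id' attributes were set as ID type
if layer is not None:
    print(layer.tagName)
```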
+ """ + def format_usage(self, usage): + return "%s %s\n%s\n%s" % (APP, VER, COPYRIGHT, + optparse.IndentedHelpFormatter.format_usage(self, usage)) # GZ: would prefer this to be in a function or class scope, but tests etc need # access to the defaults anyway _options_parser = optparse.OptionParser( - usage="%prog [-i input.svg] [-o output.svg] [OPTIONS]", - description=("If the input/output files are specified with a svgz" - " extension, then compressed SVG is assumed. If the input file is not" - " specified, stdin is used. If the output file is not specified, " - " stdout is used."), - formatter=HeaderedFormatter(max_help_position=30), - version=VER) + usage="%prog [INPUT.SVG [OUTPUT.SVG]] [OPTIONS]", + description=("If the input/output files are not specified, stdin/stdout are used. " + "If the input/output files are specified with a svgz extension, " + "then compressed SVG is assumed."), + formatter=HeaderedFormatter(max_help_position=33), + version=VER) -_options_parser.add_option("--disable-simplify-colors", - action="store_false", dest="simple_colors", default=True, - help="won't convert all colors to #RRGGBB format") -_options_parser.add_option("--disable-style-to-xml", - action="store_false", dest="style_to_xml", default=True, - help="won't convert styles into XML attributes") -_options_parser.add_option("--disable-group-collapsing", - action="store_false", dest="group_collapse", default=True, - help="won't collapse <g> elements") -_options_parser.add_option("--create-groups", - action="store_true", dest="group_create", default=False, - help="create <g> elements for runs of elements with identical attributes") -_options_parser.add_option("--enable-id-stripping", - action="store_true", dest="strip_ids", default=False, - help="remove all un-referenced ID attributes") -_options_parser.add_option("--enable-comment-stripping", - action="store_true", dest="strip_comments", default=False, - help="remove all <!-- --> comments") -_options_parser.add_option("--shorten-ids", - action="store_true", dest="shorten_ids", default=False, - help="shorten all ID attributes to the least number of letters possible") -_options_parser.add_option("--shorten-ids-prefix", - action="store", type="string", dest="shorten_ids_prefix", default="", - help="shorten all ID attributes with a custom prefix") -_options_parser.add_option("--disable-embed-rasters", - action="store_false", dest="embed_rasters", default=True, - help="won't embed rasters as base64-encoded data") -_options_parser.add_option("--keep-editor-data", - action="store_true", dest="keep_editor_data", default=False, - help="won't remove Inkscape, Sodipodi or Adobe Illustrator elements and attributes") -_options_parser.add_option("--remove-metadata", - action="store_true", dest="remove_metadata", default=False, - help="remove <metadata> elements (which may contain license metadata etc.)") -_options_parser.add_option("--renderer-workaround", - action="store_true", dest="renderer_workaround", default=True, - help="work around various renderer bugs (currently only librsvg) (default)") -_options_parser.add_option("--no-renderer-workaround", - action="store_false", dest="renderer_workaround", default=True, - help="do not work around various renderer bugs (currently only librsvg)") -_options_parser.add_option("--strip-xml-prolog", - action="store_true", dest="strip_xml_prolog", default=False, - help="won't output the <?xml ?> prolog") -_options_parser.add_option("--enable-viewboxing", - action="store_true", dest="enable_viewboxing", default=False, - 
help="changes document width/height to 100%/100% and creates viewbox coordinates") +# legacy options (kept around for backwards compatibility, should not be used in new code) +_options_parser.add_option("-p", action="store", type=int, dest="digits", help=optparse.SUPPRESS_HELP) -# GZ: this is confusing, most people will be thinking in terms of -# decimal places, which is not what decimal precision is doing -_options_parser.add_option("-p", "--set-precision", - action="store", type=int, dest="digits", default=5, - help="set number of significant digits (default: %default)") -_options_parser.add_option("-i", - action="store", dest="infilename", help=optparse.SUPPRESS_HELP) -_options_parser.add_option("-o", - action="store", dest="outfilename", help=optparse.SUPPRESS_HELP) +# general options _options_parser.add_option("-q", "--quiet", - action="store_true", dest="quiet", default=False, - help="suppress non-error output") -_options_parser.add_option("--indent", - action="store", type="string", dest="indent_type", default="space", - help="indentation of the output: none, space, tab (default: %default)") -_options_parser.add_option("--protect-ids-noninkscape", - action="store_true", dest="protect_ids_noninkscape", default=False, - help="Don't change IDs not ending with a digit") -_options_parser.add_option("--protect-ids-list", - action="store", type="string", dest="protect_ids_list", default=None, - help="Don't change IDs given in a comma-separated list") -_options_parser.add_option("--protect-ids-prefix", - action="store", type="string", dest="protect_ids_prefix", default=None, - help="Don't change IDs starting with the given prefix") + action="store_true", dest="quiet", default=False, + help="suppress non-error output") +_options_parser.add_option("-v", "--verbose", + action="store_true", dest="verbose", default=False, + help="verbose output (statistics, etc.)") +_options_parser.add_option("-i", + action="store", dest="infilename", metavar="INPUT.SVG", + help="alternative way to specify input filename") +_options_parser.add_option("-o", + action="store", dest="outfilename", metavar="OUTPUT.SVG", + help="alternative way to specify output filename") +_option_group_optimization = optparse.OptionGroup(_options_parser, "Optimization") +_option_group_optimization.add_option("--set-precision", + action="store", type=int, dest="digits", default=5, metavar="NUM", + help="set number of significant digits (default: %default)") +_option_group_optimization.add_option("--set-c-precision", + action="store", type=int, dest="cdigits", default=-1, metavar="NUM", + help="set number of significant digits for control points " + "(default: same as '--set-precision')") +_option_group_optimization.add_option("--disable-simplify-colors", + action="store_false", dest="simple_colors", default=True, + help="won't convert colors to #RRGGBB format") +_option_group_optimization.add_option("--disable-style-to-xml", + action="store_false", dest="style_to_xml", default=True, + help="won't convert styles into XML attributes") +_option_group_optimization.add_option("--disable-group-collapsing", + action="store_false", dest="group_collapse", default=True, + help="won't collapse <g> elements") +_option_group_optimization.add_option("--create-groups", + action="store_true", dest="group_create", default=False, + help="create <g> elements for runs of elements with identical attributes") +_option_group_optimization.add_option("--keep-editor-data", + action="store_true", dest="keep_editor_data", default=False, + help="won't remove 
Inkscape, Sodipodi, Adobe Illustrator " + "or Sketch elements and attributes") +_option_group_optimization.add_option("--nonsci-output", + action="store_true", dest="nonsci_output", default=False, + help="Remove scientific notation from path data") +_option_group_optimization.add_option("--keep-unreferenced-defs", + action="store_true", dest="keep_defs", default=False, + help="won't remove elements within the defs container that are unreferenced") +_option_group_optimization.add_option("--renderer-workaround", + action="store_true", dest="renderer_workaround", default=True, + help="work around various renderer bugs (currently only librsvg) (default)") +_option_group_optimization.add_option("--no-renderer-workaround", + action="store_false", dest="renderer_workaround", default=True, + help="do not work around various renderer bugs (currently only librsvg)") +_options_parser.add_option_group(_option_group_optimization) + +_option_group_document = optparse.OptionGroup(_options_parser, "SVG document") +_option_group_document.add_option("--strip-xml-prolog", + action="store_true", dest="strip_xml_prolog", default=False, + help="won't output the XML prolog (<?xml ?>)") +_option_group_document.add_option("--remove-titles", + action="store_true", dest="remove_titles", default=False, + help="remove <title> elements") +_option_group_document.add_option("--remove-descriptions", + action="store_true", dest="remove_descriptions", default=False, + help="remove <desc> elements") +_option_group_document.add_option("--remove-metadata", + action="store_true", dest="remove_metadata", default=False, + help="remove <metadata> elements " + "(which may contain license/author information etc.)") +_option_group_document.add_option("--remove-descriptive-elements", + action="store_true", dest="remove_descriptive_elements", default=False, + help="remove <title>, <desc> and <metadata> elements") +_option_group_document.add_option("--enable-comment-stripping", + action="store_true", dest="strip_comments", default=False, + help="remove all comments (<!-- -->)") +_option_group_document.add_option("--disable-embed-rasters", + action="store_false", dest="embed_rasters", default=True, + help="won't embed rasters as base64-encoded data") +_option_group_document.add_option("--enable-viewboxing", + action="store_true", dest="enable_viewboxing", default=False, + help="changes document width/height to 100%/100% and creates viewbox coordinates") +_options_parser.add_option_group(_option_group_document) + +_option_group_formatting = optparse.OptionGroup(_options_parser, "Output formatting") +_option_group_formatting.add_option("--indent", + action="store", type="string", dest="indent_type", default="space", metavar="TYPE", + help="indentation of the output: none, space, tab (default: %default)") +_option_group_formatting.add_option("--nindent", + action="store", type=int, dest="indent_depth", default=1, metavar="NUM", + help="depth of the indentation, i.e. 
number of spaces/tabs: (default: %default)") +_option_group_formatting.add_option("--no-line-breaks", + action="store_false", dest="newlines", default=True, + help="do not create line breaks in output" + "(also disables indentation; might be overridden by xml:space=\"preserve\")") +_option_group_formatting.add_option("--strip-xml-space", + action="store_true", dest="strip_xml_space_attribute", default=False, + help="strip the xml:space=\"preserve\" attribute from the root SVG element") +_options_parser.add_option_group(_option_group_formatting) + +_option_group_ids = optparse.OptionGroup(_options_parser, "ID attributes") +_option_group_ids.add_option("--enable-id-stripping", + action="store_true", dest="strip_ids", default=False, + help="remove all unreferenced IDs") +_option_group_ids.add_option("--shorten-ids", + action="store_true", dest="shorten_ids", default=False, + help="shorten all IDs to the least number of letters possible") +_option_group_ids.add_option("--shorten-ids-prefix", + action="store", type="string", dest="shorten_ids_prefix", default="", metavar="PREFIX", + help="add custom prefix to shortened IDs") +_option_group_ids.add_option("--protect-ids-noninkscape", + action="store_true", dest="protect_ids_noninkscape", default=False, + help="don't remove IDs not ending with a digit") +_option_group_ids.add_option("--protect-ids-list", + action="store", type="string", dest="protect_ids_list", metavar="LIST", + help="don't remove IDs given in this comma-separated list") +_option_group_ids.add_option("--protect-ids-prefix", + action="store", type="string", dest="protect_ids_prefix", metavar="PREFIX", + help="don't remove IDs starting with the given prefix") +_options_parser.add_option_group(_option_group_ids) + +_option_group_compatibility = optparse.OptionGroup(_options_parser, "SVG compatibility checks") +_option_group_compatibility.add_option("--error-on-flowtext", + action="store_true", dest="error_on_flowtext", default=False, + help="exit with error if the input SVG uses non-standard flowing text " + "(only warn by default)") +_options_parser.add_option_group(_option_group_compatibility) + + +def parse_args(args=None, ignore_additional_args=False): + options, rargs = _options_parser.parse_args(args) + + if rargs: + if not options.infilename: + options.infilename = rargs.pop(0) + if not options.outfilename and rargs: + options.outfilename = rargs.pop(0) + if not ignore_additional_args and rargs: + _options_parser.error("Additional arguments not handled: %r, see --help" % rargs) + if options.digits < 1: + _options_parser.error("Number of significant digits has to be larger than zero, see --help") + if options.cdigits > options.digits: + options.cdigits = -1 + print("WARNING: The value for '--set-c-precision' should be lower than the value for '--set-precision'. " + "Number of significant digits for control points reset to default value, see --help", file=sys.stderr) + if options.indent_type not in ['tab', 'space', 'none']: + _options_parser.error("Invalid value for --indent, see --help") + if options.indent_depth < 0: + _options_parser.error("Value for --nindent should be positive (or zero), see --help") + if options.infilename and options.outfilename and options.infilename == options.outfilename: + _options_parser.error("Input filename is the same as output filename") + + return options + + +# this function was replaced by 'sanitizeOptions()' and is only kept for backwards compatibility +# TODO: delete this at some point or continue to keep it around? 
+def generateDefaultOptions(): + return sanitizeOptions() + + +# sanitizes options by updating attributes in a set of defaults options while discarding unknown attributes +def sanitizeOptions(options=None): + optionsDict = dict((key, getattr(options, key)) for key in dir(options) if not key.startswith('__')) + + sanitizedOptions = _options_parser.get_default_values() + sanitizedOptions._update_careful(optionsDict) + + return sanitizedOptions def maybe_gziped_file(filename, mode="r"): - if os.path.splitext(filename)[1].lower() in (".svgz", ".gz"): - import gzip - return gzip.GzipFile(filename, mode) - return file(filename, mode) + if os.path.splitext(filename)[1].lower() in (".svgz", ".gz"): + import gzip + return gzip.GzipFile(filename, mode) + return open(filename, mode) +def getInOut(options): + if options.infilename: + infile = maybe_gziped_file(options.infilename, "rb") + # GZ: could catch a raised IOError here and report + else: + # GZ: could sniff for gzip compression here + # + # open the binary buffer of stdin and let XML parser handle decoding + try: + infile = sys.stdin.buffer + except AttributeError: + infile = sys.stdin + # the user probably does not want to manually enter SVG code into the terminal... + if sys.stdin.isatty(): + _options_parser.error("No input file specified, see --help for detailed usage information") -def parse_args(args=None): - options, rargs = _options_parser.parse_args(args) + if options.outfilename: + outfile = maybe_gziped_file(options.outfilename, "wb") + else: + # open the binary buffer of stdout as the output is already encoded + try: + outfile = sys.stdout.buffer + except AttributeError: + outfile = sys.stdout + # redirect informational output to stderr when SVG is output to stdout + options.stdout = sys.stderr - if rargs: - _options_parser.error("Additional arguments not handled: %r, see --help" % rargs) - if options.digits < 0: - _options_parser.error("Can't have negative significant digits, see --help") - if not options.indent_type in ["tab", "space", "none"]: - _options_parser.error("Invalid value for --indent, see --help") - if options.infilename and options.outfilename and options.infilename == options.outfilename: - _options_parser.error("Input filename is the same as output filename") - - if options.infilename: - infile = maybe_gziped_file(options.infilename) - # GZ: could catch a raised IOError here and report - else: - # GZ: could sniff for gzip compression here - infile = sys.stdin - if options.outfilename: - outfile = maybe_gziped_file(options.outfilename, "wb") - else: - outfile = sys.stdout - - return options, [infile, outfile] + return [infile, outfile] - -def getReport(): - return ' Number of elements removed: ' + str(numElemsRemoved) + os.linesep + \ - ' Number of attributes removed: ' + str(numAttrsRemoved) + os.linesep + \ - ' Number of unreferenced id attributes removed: ' + str(numIDsRemoved) + os.linesep + \ - ' Number of style properties fixed: ' + str(numStylePropsFixed) + os.linesep + \ - ' Number of raster images embedded inline: ' + str(numRastersEmbedded) + os.linesep + \ - ' Number of path segments reduced/removed: ' + str(numPathSegmentsReduced) + os.linesep + \ - ' Number of bytes saved in path data: ' + str(numBytesSavedInPathData) + os.linesep + \ - ' Number of bytes saved in colors: ' + str(numBytesSavedInColors) + os.linesep + \ - ' Number of points removed from polygons: ' + str(numPointsRemovedFromPolygon) + os.linesep + \ - ' Number of bytes saved in comments: ' + str(numCommentBytes) + os.linesep + \ - ' Number 
of bytes saved in id attributes: ' + str(numBytesSavedInIDs) + os.linesep + \ - ' Number of bytes saved in lengths: ' + str(numBytesSavedInLengths) + os.linesep + \ - ' Number of bytes saved in transformations: ' + str(numBytesSavedInTransforms) - - - -def generateDefaultOptions(): - ## FIXME: clean up this mess/hack and refactor arg parsing to argparse - class Struct: - def __init__(self, **entries): - self.__dict__.update(entries) - - d = parse_args()[0].__dict__.copy() - - return Struct(**d) - +def generate_report(stats): + return ( + ' Number of elements removed: ' + str(stats.num_elements_removed) + os.linesep + + ' Number of attributes removed: ' + str(stats.num_attributes_removed) + os.linesep + + ' Number of unreferenced IDs removed: ' + str(stats.num_ids_removed) + os.linesep + + ' Number of comments removed: ' + str(stats.num_comments_removed) + os.linesep + + ' Number of style properties fixed: ' + str(stats.num_style_properties_fixed) + os.linesep + + ' Number of raster images embedded: ' + str(stats.num_rasters_embedded) + os.linesep + + ' Number of path segments reduced/removed: ' + str(stats.num_path_segments_removed) + os.linesep + + ' Number of points removed from polygons: ' + str(stats.num_points_removed_from_polygon) + os.linesep + + ' Number of bytes saved in path data: ' + str(stats.num_bytes_saved_in_path_data) + os.linesep + + ' Number of bytes saved in colors: ' + str(stats.num_bytes_saved_in_colors) + os.linesep + + ' Number of bytes saved in comments: ' + str(stats.num_bytes_saved_in_comments) + os.linesep + + ' Number of bytes saved in IDs: ' + str(stats.num_bytes_saved_in_ids) + os.linesep + + ' Number of bytes saved in lengths: ' + str(stats.num_bytes_saved_in_lengths) + os.linesep + + ' Number of bytes saved in transformations: ' + str(stats.num_bytes_saved_in_transforms) + ) def start(options, input, output): - if sys.platform == "win32": - from time import clock as get_tick - else: - # GZ: is this different from time.time() in any way? - def get_tick(): - return os.times()[0] + # sanitize options (take missing attributes from defaults, discard unknown attributes) + options = sanitizeOptions(options) - start = get_tick() + start = time.time() + stats = ScourStats() - if not options.quiet: - print >>sys.stderr, "%s %s\n%s" % (APP, VER, COPYRIGHT) + # do the work + in_string = input.read() + out_string = scourString(in_string, options, stats=stats).encode("UTF-8") + output.write(out_string) - # do the work - in_string = input.read() - out_string = scourString(in_string, options).encode("UTF-8") - output.write(out_string) + # Close input and output files (but do not attempt to close stdin/stdout!) 
+ if not ((input is sys.stdin) or (hasattr(sys.stdin, 'buffer') and input is sys.stdin.buffer)): + input.close() + if not ((output is sys.stdout) or (hasattr(sys.stdout, 'buffer') and output is sys.stdout.buffer)): + output.close() - # Close input and output files - input.close() - output.close() + end = time.time() - end = get_tick() + # run-time in ms + duration = int(round((end - start) * 1000.)) - # GZ: not using globals would be good too - if not options.quiet: - print >>sys.stderr, ' File:', input.name, \ - os.linesep + ' Time taken:', str(end-start) + 's' + os.linesep, \ - getReport() - - oldsize = len(in_string) - newsize = len(out_string) - sizediff = (newsize / oldsize) * 100 - print >>sys.stderr, ' Original file size:', oldsize, 'bytes;', \ - 'new file size:', newsize, 'bytes (' + str(sizediff)[:5] + '%)' + oldsize = len(in_string) + newsize = len(out_string) + sizediff = (newsize / oldsize) * 100. + if not options.quiet: + print('Scour processed file "{}" in {} ms: {}/{} bytes new/orig -> {:.1f}%'.format( + input.name, + duration, + newsize, + oldsize, + sizediff), file=options.ensure_value("stdout", sys.stdout)) + if options.verbose: + print(generate_report(stats), file=options.ensure_value("stdout", sys.stdout)) def run(): - options, (input, output) = parse_args() - start(options, input, output) - + options = parse_args() + (input, output) = getInOut(options) + start(options, input, output) if __name__ == '__main__': - run() + run() diff --git a/scour/stats.py b/scour/stats.py new file mode 100644 index 0000000..2762b92 --- /dev/null +++ b/scour/stats.py @@ -0,0 +1,28 @@ +class ScourStats(object): + + __slots__ = ( + 'num_elements_removed', + 'num_attributes_removed', + 'num_style_properties_fixed', + 'num_bytes_saved_in_colors', + 'num_ids_removed', + 'num_comments_removed', + 'num_style_properties_fixed', + 'num_rasters_embedded', + 'num_path_segments_removed', + 'num_points_removed_from_polygon', + 'num_bytes_saved_in_path_data', + 'num_bytes_saved_in_colors', + 'num_bytes_saved_in_comments', + 'num_bytes_saved_in_ids', + 'num_bytes_saved_in_lengths', + 'num_bytes_saved_in_transforms', + ) + + def __init__(self): + self.reset() + + def reset(self): + # Set all stats to 0 + for attr in self.__slots__: + setattr(self, attr, 0) diff --git a/scour/svg_regex.py b/scour/svg_regex.py index ce83c7b..c62ba2a 100644 --- a/scour/svg_regex.py +++ b/scour/svg_regex.py @@ -41,15 +41,22 @@ Out[4]: [('M', [(0.60509999999999997, 0.5)])] In [5]: svg_parser.parse('M 100-200') # Another edge case Out[5]: [('M', [(100.0, -200.0)])] """ +from __future__ import absolute_import import re -from decimal import * +from decimal import Decimal, getcontext +from functools import partial # Sentinel. + + class _EOF(object): + def __repr__(self): return 'EOF' + + EOF = _EOF() lexicon = [ @@ -69,6 +76,7 @@ class Lexer(object): http://www.gooli.org/blog/a-simple-lexer-in-python/ """ + def __init__(self, lexicon): self.lexicon = lexicon parts = [] @@ -91,6 +99,7 @@ class Lexer(object): break yield (EOF, None) + svg_lexer = Lexer(lexicon) @@ -145,140 +154,148 @@ class SVGPathParser(object): def parse(self, text): """ Parse a string of SVG <path> data. 
""" - next = self.lexer.lex(text).next - token = next() - return self.rule_svg_path(next, token) + gen = self.lexer.lex(text) + next_val_fn = partial(next, *(gen,)) + token = next_val_fn() + return self.rule_svg_path(next_val_fn, token) - def rule_svg_path(self, next, token): + def rule_svg_path(self, next_val_fn, token): commands = [] while token[0] is not EOF: if token[0] != 'command': raise SyntaxError("expecting a command; got %r" % (token,)) rule = self.command_dispatch[token[1]] - command_group, token = rule(next, token) + command_group, token = rule(next_val_fn, token) commands.append(command_group) return commands - def rule_closepath(self, next, token): + def rule_closepath(self, next_val_fn, token): command = token[1] - token = next() + token = next_val_fn() return (command, []), token - def rule_moveto_or_lineto(self, next, token): + def rule_moveto_or_lineto(self, next_val_fn, token): command = token[1] - token = next() + token = next_val_fn() coordinates = [] while token[0] in self.number_tokens: - pair, token = self.rule_coordinate_pair(next, token) + pair, token = self.rule_coordinate_pair(next_val_fn, token) coordinates.extend(pair) return (command, coordinates), token - def rule_orthogonal_lineto(self, next, token): + def rule_orthogonal_lineto(self, next_val_fn, token): command = token[1] - token = next() + token = next_val_fn() coordinates = [] while token[0] in self.number_tokens: - coord, token = self.rule_coordinate(next, token) + coord, token = self.rule_coordinate(next_val_fn, token) coordinates.append(coord) return (command, coordinates), token - def rule_curveto3(self, next, token): + def rule_curveto3(self, next_val_fn, token): command = token[1] - token = next() + token = next_val_fn() coordinates = [] while token[0] in self.number_tokens: - pair1, token = self.rule_coordinate_pair(next, token) - pair2, token = self.rule_coordinate_pair(next, token) - pair3, token = self.rule_coordinate_pair(next, token) + pair1, token = self.rule_coordinate_pair(next_val_fn, token) + pair2, token = self.rule_coordinate_pair(next_val_fn, token) + pair3, token = self.rule_coordinate_pair(next_val_fn, token) coordinates.extend(pair1) coordinates.extend(pair2) coordinates.extend(pair3) return (command, coordinates), token - def rule_curveto2(self, next, token): + def rule_curveto2(self, next_val_fn, token): command = token[1] - token = next() + token = next_val_fn() coordinates = [] while token[0] in self.number_tokens: - pair1, token = self.rule_coordinate_pair(next, token) - pair2, token = self.rule_coordinate_pair(next, token) + pair1, token = self.rule_coordinate_pair(next_val_fn, token) + pair2, token = self.rule_coordinate_pair(next_val_fn, token) coordinates.extend(pair1) coordinates.extend(pair2) return (command, coordinates), token - def rule_curveto1(self, next, token): + def rule_curveto1(self, next_val_fn, token): command = token[1] - token = next() + token = next_val_fn() coordinates = [] while token[0] in self.number_tokens: - pair1, token = self.rule_coordinate_pair(next, token) + pair1, token = self.rule_coordinate_pair(next_val_fn, token) coordinates.extend(pair1) return (command, coordinates), token - def rule_elliptical_arc(self, next, token): + def rule_elliptical_arc(self, next_val_fn, token): command = token[1] - token = next() + token = next_val_fn() arguments = [] while token[0] in self.number_tokens: rx = Decimal(token[1]) * 1 if rx < Decimal("0.0"): raise SyntaxError("expecting a nonnegative number; got %r" % (token,)) - token = next() + token = 
next_val_fn() if token[0] not in self.number_tokens: raise SyntaxError("expecting a number; got %r" % (token,)) ry = Decimal(token[1]) * 1 if ry < Decimal("0.0"): raise SyntaxError("expecting a nonnegative number; got %r" % (token,)) - token = next() + token = next_val_fn() if token[0] not in self.number_tokens: raise SyntaxError("expecting a number; got %r" % (token,)) axis_rotation = Decimal(token[1]) * 1 - token = next() - if token[1] not in ('0', '1'): + token = next_val_fn() + if token[1][0] not in ('0', '1'): raise SyntaxError("expecting a boolean flag; got %r" % (token,)) - large_arc_flag = Decimal(token[1]) * 1 + large_arc_flag = Decimal(token[1][0]) * 1 - token = next() - if token[1] not in ('0', '1'): + if len(token[1]) > 1: + token = list(token) + token[1] = token[1][1:] + else: + token = next_val_fn() + if token[1][0] not in ('0', '1'): raise SyntaxError("expecting a boolean flag; got %r" % (token,)) - sweep_flag = Decimal(token[1]) * 1 + sweep_flag = Decimal(token[1][0]) * 1 - token = next() + if len(token[1]) > 1: + token = list(token) + token[1] = token[1][1:] + else: + token = next_val_fn() if token[0] not in self.number_tokens: raise SyntaxError("expecting a number; got %r" % (token,)) x = Decimal(token[1]) * 1 - token = next() + token = next_val_fn() if token[0] not in self.number_tokens: raise SyntaxError("expecting a number; got %r" % (token,)) y = Decimal(token[1]) * 1 - token = next() + token = next_val_fn() arguments.extend([rx, ry, axis_rotation, large_arc_flag, sweep_flag, x, y]) return (command, arguments), token - def rule_coordinate(self, next, token): + def rule_coordinate(self, next_val_fn, token): if token[0] not in self.number_tokens: raise SyntaxError("expecting a number; got %r" % (token,)) x = getcontext().create_decimal(token[1]) - token = next() + token = next_val_fn() return x, token - - def rule_coordinate_pair(self, next, token): + def rule_coordinate_pair(self, next_val_fn, token): # Inline these since this rule is so common. if token[0] not in self.number_tokens: raise SyntaxError("expecting a number; got %r" % (token,)) x = getcontext().create_decimal(token[1]) - token = next() + token = next_val_fn() if token[0] not in self.number_tokens: raise SyntaxError("expecting a number; got %r" % (token,)) y = getcontext().create_decimal(token[1]) - token = next() + token = next_val_fn() return [x, y], token diff --git a/scour/svg_transform.py b/scour/svg_transform.py index 72fd06f..83454b3 100644 --- a/scour/svg_transform.py +++ b/scour/svg_transform.py @@ -56,15 +56,22 @@ Multiple transformations are supported: In [12]: svg_transform_parser.parse('translate(30 -30) rotate(36)') Out[12]: [('translate', [30.0, -30.0]), ('rotate', [36.0])] """ +from __future__ import absolute_import import re -from decimal import * +from decimal import Decimal +from functools import partial + +from six.moves import range # Sentinel. class _EOF(object): + def __repr__(self): return 'EOF' + + EOF = _EOF() lexicon = [ @@ -86,6 +93,7 @@ class Lexer(object): http://www.gooli.org/blog/a-simple-lexer-in-python/ """ + def __init__(self, lexicon): self.lexicon = lexicon parts = [] @@ -108,6 +116,7 @@ class Lexer(object): break yield (EOF, None) + svg_lexer = Lexer(lexicon) @@ -145,88 +154,90 @@ class SVGTransformationParser(object): def parse(self, text): """ Parse a string of SVG transform="" data. 
""" - next = self.lexer.lex(text).next + gen = self.lexer.lex(text) + next_val_fn = partial(next, *(gen,)) + commands = [] - token = next() + token = next_val_fn() while token[0] is not EOF: - command, token = self.rule_svg_transform(next, token) - commands.append(command) + command, token = self.rule_svg_transform(next_val_fn, token) + commands.append(command) return commands - def rule_svg_transform(self, next, token): + def rule_svg_transform(self, next_val_fn, token): if token[0] != 'command': raise SyntaxError("expecting a transformation type; got %r" % (token,)) command = token[1] rule = self.command_dispatch[command] - token = next() + token = next_val_fn() if token[0] != 'coordstart': raise SyntaxError("expecting '('; got %r" % (token,)) - numbers, token = rule(next, token) + numbers, token = rule(next_val_fn, token) if token[0] != 'coordend': raise SyntaxError("expecting ')'; got %r" % (token,)) - token = next() + token = next_val_fn() return (command, numbers), token - def rule_1or2numbers(self, next, token): + def rule_1or2numbers(self, next_val_fn, token): numbers = [] # 1st number is mandatory - token = next() - number, token = self.rule_number(next, token) + token = next_val_fn() + number, token = self.rule_number(next_val_fn, token) numbers.append(number) # 2nd number is optional - number, token = self.rule_optional_number(next, token) + number, token = self.rule_optional_number(next_val_fn, token) if number is not None: numbers.append(number) return numbers, token - def rule_1number(self, next, token): + def rule_1number(self, next_val_fn, token): # this number is mandatory - token = next() - number, token = self.rule_number(next, token) + token = next_val_fn() + number, token = self.rule_number(next_val_fn, token) numbers = [number] return numbers, token - def rule_1or3numbers(self, next, token): + def rule_1or3numbers(self, next_val_fn, token): numbers = [] # 1st number is mandatory - token = next() - number, token = self.rule_number(next, token) + token = next_val_fn() + number, token = self.rule_number(next_val_fn, token) numbers.append(number) # 2nd number is optional - number, token = self.rule_optional_number(next, token) + number, token = self.rule_optional_number(next_val_fn, token) if number is not None: # but, if the 2nd number is provided, the 3rd is mandatory. # we can't have just 2. 
numbers.append(number) - number, token = self.rule_number(next, token) + number, token = self.rule_number(next_val_fn, token) numbers.append(number) return numbers, token - def rule_6numbers(self, next, token): + def rule_6numbers(self, next_val_fn, token): numbers = [] - token = next() + token = next_val_fn() # all numbers are mandatory - for i in xrange(6): - number, token = self.rule_number(next, token) + for i in range(6): + number, token = self.rule_number(next_val_fn, token) numbers.append(number) return numbers, token - def rule_number(self, next, token): + def rule_number(self, next_val_fn, token): if token[0] not in self.number_tokens: raise SyntaxError("expecting a number; got %r" % (token,)) x = Decimal(token[1]) * 1 - token = next() + token = next_val_fn() return x, token - def rule_optional_number(self, next, token): + def rule_optional_number(self, next_val_fn, token): if token[0] not in self.number_tokens: return None, token else: x = Decimal(token[1]) * 1 - token = next() + token = next_val_fn() return x, token diff --git a/scour/yocto_css.py b/scour/yocto_css.py index 3efeeda..0aaac5a 100644 --- a/scour/yocto_css.py +++ b/scour/yocto_css.py @@ -48,25 +48,29 @@ # | DASHMATCH | FUNCTION S* any* ')' # | '(' S* any* ')' | '[' S* any* ']' ] S*; + def parseCssString(str): - rules = [] - # first, split on } to get the rule chunks - chunks = str.split('}') - for chunk in chunks: - # second, split on { to get the selector and the list of properties - bits = chunk.split('{') - if len(bits) != 2: continue - rule = {} - rule['selector'] = bits[0].strip() - # third, split on ; to get the property declarations - bites = bits[1].strip().split(';') - if len(bites) < 1: continue - props = {} - for bite in bites: - # fourth, split on : to get the property name and value - nibbles = bite.strip().split(':') - if len(nibbles) != 2: continue - props[nibbles[0].strip()] = nibbles[1].strip() - rule['properties'] = props - rules.append(rule) - return rules + rules = [] + # first, split on } to get the rule chunks + chunks = str.split('}') + for chunk in chunks: + # second, split on { to get the selector and the list of properties + bits = chunk.split('{') + if len(bits) != 2: + continue + rule = {} + rule['selector'] = bits[0].strip() + # third, split on ; to get the property declarations + bites = bits[1].strip().split(';') + if len(bites) < 1: + continue + props = {} + for bite in bites: + # fourth, split on : to get the property name and value + nibbles = bite.strip().split(':') + if len(nibbles) != 2: + continue + props[nibbles[0].strip()] = nibbles[1].strip() + rule['properties'] = props + rules.append(rule) + return rules diff --git a/setup.py b/setup.py index 73a7134..990b596 100644 --- a/setup.py +++ b/setup.py @@ -1,64 +1,87 @@ ############################################################################### -## -## Copyright (C) 2013 Tavendo GmbH -## -## Licensed under the Apache License, Version 2.0 (the "License"); -## you may not use this file except in compliance with the License. -## You may obtain a copy of the License at -## -## http://www.apache.org/licenses/LICENSE-2.0 -## -## Unless required by applicable law or agreed to in writing, software -## distributed under the License is distributed on an "AS IS" BASIS, -## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -## See the License for the specific language governing permissions and -## limitations under the License. 
-## +# +# Copyright (C) 2013-2014 Tavendo GmbH +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ############################################################################### -from setuptools import setup, find_packages +import os +import re + +from setuptools import find_packages, setup LONGDESC = """ -Scour is a SVG optimizer/sanitizer that can be used to produce SVGs for Web deployment. +Scour is an SVG optimizer/cleaner that reduces the size of scalable +vector graphics by optimizing structure and removing unnecessary data. + +It can be used to create streamlined vector graphics suitable for web +deployment, publishing/sharing or further processing. + +The goal of Scour is to output a file that renders identically at a +fraction of the size by removing a lot of redundant information created +by most SVG editors. Optimization options are typically lossless but can +be tweaked for more aggressive cleaning. Website - http://www.codedread.com/scour/ (original website) - - https://github.com/oberstet/scour (today) + - https://github.com/scour-project/scour (today) Authors: - Jeff Schiller, Louis Simard (original authors) - Tobias Oberstein (maintainer) + - Patrick Storz (maintainer) """ -setup ( - name = 'scour', - version = '0.27', - description = 'Scour SVG Optimizer', -# long_description = open("README.md").read(), - long_description = LONGDESC, - license = 'Apache License 2.0', - author = 'Jeff Schiller', - author_email = 'codedread@gmail.com', - url = 'https://github.com/oberstet/scour', - platforms = ('Any'), - install_requires = [], - packages = find_packages(), - zip_safe = True, - entry_points = { - 'console_scripts': [ - 'scour = scour.scour:run' - ]}, - classifiers = ["License :: OSI Approved :: Apache Software License", - "Development Status :: 5 - Production/Stable", - "Environment :: Console", - "Intended Audience :: Developers", - "Intended Audience :: System Administrators", - "Operating System :: OS Independent", - "Programming Language :: Python", - "Topic :: Internet", - "Topic :: Software Development :: Build Tools", - "Topic :: Software Development :: Pre-processors", - "Topic :: Multimedia :: Graphics :: Graphics Conversion", - "Topic :: Utilities"], - keywords = 'svg optimizer' +VERSIONFILE = os.path.join(os.path.dirname(os.path.realpath(__file__)), "scour", "__init__.py") +verstrline = open(VERSIONFILE, "rt").read() +VSRE = r"^__version__ = u['\"]([^'\"]*)['\"]" +mo = re.search(VSRE, verstrline, re.M) +if mo: + verstr = mo.group(1) +else: + raise RuntimeError("Unable to find version string in %s." 
% (VERSIONFILE,)) + + +setup( + name='scour', + version=verstr, + description='Scour SVG Optimizer', + # long_description = open("README.md").read(), + long_description=LONGDESC, + license='Apache License 2.0', + author='Jeff Schiller', + author_email='codedread@gmail.com', + url='https://github.com/scour-project/scour', + platforms=('Any'), + install_requires=['six>=1.9.0'], + packages=find_packages(), + zip_safe=True, + entry_points={ + 'console_scripts': [ + 'scour = scour.scour:run' + ]}, + classifiers=["License :: OSI Approved :: Apache Software License", + "Development Status :: 5 - Production/Stable", + "Environment :: Console", + "Intended Audience :: Developers", + "Intended Audience :: System Administrators", + "Operating System :: OS Independent", + "Programming Language :: Python", + "Topic :: Internet", + "Topic :: Software Development :: Build Tools", + "Topic :: Software Development :: Pre-processors", + "Topic :: Multimedia :: Graphics :: Graphics Conversion", + "Topic :: Utilities"], + keywords='svg optimizer' ) diff --git a/test_css.py b/test_css.py new file mode 100755 index 0000000..d7fd3e2 --- /dev/null +++ b/test_css.py @@ -0,0 +1,57 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Test Harness for Scour +# +# Copyright 2010 Jeff Schiller +# +# This file is part of Scour, http://www.codedread.com/scour/ +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import + +import unittest + +from scour.yocto_css import parseCssString + + +class Blank(unittest.TestCase): + + def runTest(self): + r = parseCssString('') + self.assertEqual(len(r), 0, 'Blank string returned non-empty list') + self.assertEqual(type(r), type([]), 'Blank string returned non list') + + +class ElementSelector(unittest.TestCase): + + def runTest(self): + r = parseCssString('foo {}') + self.assertEqual(len(r), 1, 'Element selector not returned') + self.assertEqual(r[0]['selector'], 'foo', 'Selector for foo not returned') + self.assertEqual(len(r[0]['properties']), 0, 'Property list for foo not empty') + + +class ElementSelectorWithProperty(unittest.TestCase): + + def runTest(self): + r = parseCssString('foo { bar: baz}') + self.assertEqual(len(r), 1, 'Element selector not returned') + self.assertEqual(r[0]['selector'], 'foo', 'Selector for foo not returned') + self.assertEqual(len(r[0]['properties']), 1, 'Property list for foo did not have 1') + self.assertEqual(r[0]['properties']['bar'], 'baz', 'Property bar did not have baz value') + + +if __name__ == '__main__': + unittest.main() diff --git a/test_scour.py b/test_scour.py new file mode 100755 index 0000000..549333f --- /dev/null +++ b/test_scour.py @@ -0,0 +1,2796 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +# Test Harness for Scour +# +# Copyright 2010 Jeff Schiller +# Copyright 2010 Louis Simard +# +# This file is part of Scour, http://www.codedread.com/scour/ +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import print_function # use print() as a function in Python 2 (see PEP 3105) +from __future__ import absolute_import # use absolute imports by default in Python 2 (see PEP 328) + +import os +import sys +import unittest + +import six +from six.moves import map, range + +from scour.scour import (make_well_formed, parse_args, scourString, scourXmlFile, start, run, + XML_ENTS_ESCAPE_APOS, XML_ENTS_ESCAPE_QUOT) +from scour.svg_regex import svg_parser +from scour import __version__ + + +SVGNS = 'http://www.w3.org/2000/svg' + + +# I couldn't figure out how to get ElementTree to work with the following XPath +# "//*[namespace-uri()='http://example.com']" +# so I decided to use minidom and this helper function that performs a test on a given node +# and all its children +# func must return either True (if pass) or False (if fail) +def walkTree(elem, func): + if func(elem) is False: + return False + for child in elem.childNodes: + if walkTree(child, func) is False: + return False + return True + + +class ScourOptions: + pass + + +class EmptyOptions(unittest.TestCase): + + MINIMAL_SVG = '<?xml version="1.0" encoding="UTF-8"?>\n' \ + '<svg xmlns="http://www.w3.org/2000/svg"/>\n' + + def test_scourString(self): + options = ScourOptions + try: + scourString(self.MINIMAL_SVG, options) + fail = False + except Exception: + fail = True + self.assertEqual(fail, False, + 'Exception when calling "scourString" with empty options object') + + def test_scourXmlFile(self): + options = ScourOptions + try: + scourXmlFile('unittests/minimal.svg', options) + fail = False + except Exception: + fail = True + self.assertEqual(fail, False, + 'Exception when calling "scourXmlFile" with empty options object') + + def test_start(self): + options = ScourOptions + input = open('unittests/minimal.svg', 'rb') + output = open('testscour_temp.svg', 'wb') + + stdout_temp = sys.stdout + sys.stdout = None + try: + start(options, input, output) + fail = False + except Exception: + fail = True + sys.stdout = stdout_temp + + os.remove('testscour_temp.svg') + + self.assertEqual(fail, False, + 'Exception when calling "start" with empty options object') + + +class InvalidOptions(unittest.TestCase): + + def runTest(self): + options = ScourOptions + options.invalidOption = "invalid value" + try: + scourXmlFile('unittests/ids-to-strip.svg', options) + fail = False + except Exception: + fail = True + self.assertEqual(fail, False, + 'Exception when calling Scour with invalid options') + + +class GetElementById(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/ids.svg') + self.assertIsNotNone(doc.getElementById('svg1'), 'Root SVG element not found by ID') + self.assertIsNotNone(doc.getElementById('linearGradient1'), 'linearGradient not found by ID') + self.assertIsNotNone(doc.getElementById('layer1'), 'g not found by ID') + self.assertIsNotNone(doc.getElementById('rect1'), 'rect not found by ID') + self.assertIsNone(doc.getElementById('rect2'), 'Non-existing element found by ID') + + +class NoInkscapeElements(unittest.TestCase): + + def runTest(self): + 
self.assertNotEqual(walkTree(scourXmlFile('unittests/sodipodi.svg').documentElement, + lambda e: e.namespaceURI != 'http://www.inkscape.org/namespaces/inkscape'), + False, + 'Found Inkscape elements') + + +class NoSodipodiElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/sodipodi.svg').documentElement, + lambda e: e.namespaceURI != 'http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd'), + False, + 'Found Sodipodi elements') + + +class NoAdobeIllustratorElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/AdobeIllustrator/10.0/'), + False, + 'Found Adobe Illustrator elements') + + +class NoAdobeGraphsElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/Graphs/1.0/'), + False, + 'Found Adobe Graphs elements') + + +class NoAdobeSVGViewerElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/AdobeSVGViewerExtensions/3.0/'), + False, + 'Found Adobe SVG Viewer elements') + + +class NoAdobeVariablesElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/Variables/1.0/'), + False, + 'Found Adobe Variables elements') + + +class NoAdobeSaveForWebElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/SaveForWeb/1.0/'), + False, + 'Found Adobe Save For Web elements') + + +class NoAdobeExtensibilityElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/Extensibility/1.0/'), + False, + 'Found Adobe Extensibility elements') + + +class NoAdobeFlowsElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/Flows/1.0/'), + False, + 'Found Adobe Flows elements') + + +class NoAdobeImageReplacementElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/ImageReplacement/1.0/'), + False, + 'Found Adobe Image Replacement elements') + + +class NoAdobeCustomElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/GenericCustomNamespace/1.0/'), + False, + 'Found Adobe Custom elements') + + +class NoAdobeXPathElements(unittest.TestCase): + + def runTest(self): + self.assertNotEqual(walkTree(scourXmlFile('unittests/adobe.svg').documentElement, + lambda e: e.namespaceURI != 'http://ns.adobe.com/XPath/1.0/'), + False, + 'Found Adobe XPath elements') + + +class DoNotRemoveTitleWithOnlyText(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/descriptive-elements-with-text.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'title')), 1, + 'Removed title element with only 
text child') + + +class RemoveEmptyTitleElement(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/empty-descriptive-elements.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'title')), 0, + 'Did not remove empty title element') + + +class DoNotRemoveDescriptionWithOnlyText(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/descriptive-elements-with-text.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'desc')), 1, + 'Removed description element with only text child') + + +class RemoveEmptyDescriptionElement(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/empty-descriptive-elements.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'desc')), 0, + 'Did not remove empty description element') + + +class DoNotRemoveMetadataWithOnlyText(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/descriptive-elements-with-text.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'metadata')), 1, + 'Removed metadata element with only text child') + + +class RemoveEmptyMetadataElement(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/empty-descriptive-elements.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'metadata')), 0, + 'Did not remove empty metadata element') + + +class DoNotRemoveDescriptiveElementsWithOnlyText(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/descriptive-elements-with-text.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'title')), 1, + 'Removed title element with only text child') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'desc')), 1, + 'Removed description element with only text child') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'metadata')), 1, + 'Removed metadata element with only text child') + + +class RemoveEmptyDescriptiveElements(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/empty-descriptive-elements.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'title')), 0, + 'Did not remove empty title element') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'desc')), 0, + 'Did not remove empty description element') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'metadata')), 0, + 'Did not remove empty metadata element') + + +class RemoveEmptyGElements(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/empty-g.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'g')), 1, + 'Did not remove empty g element') + + +class RemoveUnreferencedPattern(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/unreferenced-pattern.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'pattern')), 0, + 'Unreferenced pattern not removed') + + +class RemoveUnreferencedLinearGradient(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/unreferenced-linearGradient.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'linearGradient')), 0, + 'Unreferenced linearGradient not removed') + + +class RemoveUnreferencedRadialGradient(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/unreferenced-radialGradient.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'radialGradient')), 0, + 'Unreferenced radialGradient not removed') + + +class RemoveUnreferencedElementInDefs(unittest.TestCase): + + def runTest(self): + doc =
scourXmlFile('unittests/referenced-elements-1.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'rect')), 1, + 'Unreferenced rect left in defs') + + +class RemoveUnreferencedDefs(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/unreferenced-defs.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'linearGradient')), 1, + 'Referenced linearGradient removed from defs') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'radialGradient')), 0, + 'Unreferenced radialGradient left in defs') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'pattern')), 0, + 'Unreferenced pattern left in defs') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'rect')), 1, + 'Referenced rect removed from defs') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'circle')), 0, + 'Unreferenced circle left in defs') + + +class KeepUnreferencedDefs(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/unreferenced-defs.svg', + parse_args(['--keep-unreferenced-defs'])) + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'linearGradient')), 1, + 'Referenced linearGradient removed from defs with `--keep-unreferenced-defs`') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'radialGradient')), 1, + 'Unreferenced radialGradient removed from defs with `--keep-unreferenced-defs`') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'pattern')), 1, + 'Unreferenced pattern removed from defs with `--keep-unreferenced-defs`') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'rect')), 1, + 'Referenced rect removed from defs with `--keep-unreferenced-defs`') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'circle')), 1, + 'Unreferenced circle removed from defs with `--keep-unreferenced-defs`') + + +class DoNotRemoveChainedRefsInDefs(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/refs-in-defs.svg') + g = doc.getElementsByTagNameNS(SVGNS, 'g')[0] + self.assertEqual(g.childNodes.length >= 2, True, + 'Chained references not honored in defs') + + +class KeepTitleInDefs(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/referenced-elements-1.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'title')), 1, + 'Title removed from in defs') + + +class RemoveNestedDefs(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/nested-defs.svg') + allDefs = doc.getElementsByTagNameNS(SVGNS, 'defs') + self.assertEqual(len(allDefs), 1, 'More than one defs left in doc') + + +class KeepUnreferencedIDsWhenEnabled(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/ids-to-strip.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'svg')[0].getAttribute('id'), 'boo', + '<svg> ID stripped when it should be disabled') + + +class RemoveUnreferencedIDsWhenEnabled(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/ids-to-strip.svg', + parse_args(['--enable-id-stripping'])) + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'svg')[0].getAttribute('id'), '', + '<svg> ID not stripped') + + +class ProtectIDs(unittest.TestCase): + + def test_protect_none(self): + doc = scourXmlFile('unittests/ids-protect.svg', + parse_args(['--enable-id-stripping'])) + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[0].getAttribute('id'), '', + "ID 'text1' not stripped when none of the '--protect-ids-_' options was specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 
'text')[1].getAttribute('id'), '', + "ID 'text2' not stripped when none of the '--protect-ids-_' options was specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[2].getAttribute('id'), '', + "ID 'text3' not stripped when none of the '--protect-ids-_' options was specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[3].getAttribute('id'), '', + "ID 'text_custom' not stripped when none of the '--protect-ids-_' options was specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[4].getAttribute('id'), '', + "ID 'my_text1' not stripped when none of the '--protect-ids-_' options was specified") + + def test_protect_ids_noninkscape(self): + doc = scourXmlFile('unittests/ids-protect.svg', + parse_args(['--enable-id-stripping', '--protect-ids-noninkscape'])) + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[0].getAttribute('id'), '', + "ID 'text1' should have been stripped despite '--protect-ids-noninkscape' being specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[1].getAttribute('id'), '', + "ID 'text2' should have been stripped despite '--protect-ids-noninkscape' being specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[2].getAttribute('id'), '', + "ID 'text3' should have been stripped despite '--protect-ids-noninkscape' being specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[3].getAttribute('id'), 'text_custom', + "ID 'text_custom' should NOT have been stripped because of '--protect-ids-noninkscape'") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[4].getAttribute('id'), '', + "ID 'my_text1' should have been stripped despite '--protect-ids-noninkscape' being specified") + + def test_protect_ids_list(self): + doc = scourXmlFile('unittests/ids-protect.svg', + parse_args(['--enable-id-stripping', '--protect-ids-list=text2,text3'])) + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[0].getAttribute('id'), '', + "ID 'text1' should have been stripped despite '--protect-ids-list' being specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[1].getAttribute('id'), 'text2', + "ID 'text2' should NOT have been stripped because of '--protect-ids-list'") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[2].getAttribute('id'), 'text3', + "ID 'text3' should NOT have been stripped because of '--protect-ids-list'") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[3].getAttribute('id'), '', + "ID 'text_custom' should have been stripped despite '--protect-ids-list' being specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[4].getAttribute('id'), '', + "ID 'my_text1' should have been stripped despite '--protect-ids-list' being specified") + + def test_protect_ids_prefix(self): + doc = scourXmlFile('unittests/ids-protect.svg', + parse_args(['--enable-id-stripping', '--protect-ids-prefix=my'])) + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[0].getAttribute('id'), '', + "ID 'text1' should have been stripped despite '--protect-ids-prefix' being specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[1].getAttribute('id'), '', + "ID 'text2' should have been stripped despite '--protect-ids-prefix' being specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[2].getAttribute('id'), '', + "ID 'text3' should have been stripped despite '--protect-ids-prefix' being specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[3].getAttribute('id'), '', + 
"ID 'text_custom' should have been stripped despite '--protect-ids-prefix' being specified") + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'text')[4].getAttribute('id'), 'my_text1', + "ID 'my_text1' should NOT have been stripped because of '--protect-ids-prefix'") + + +class RemoveUselessNestedGroups(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/nested-useless-groups.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'g')), 1, + 'Useless nested groups not removed') + + +class DoNotRemoveUselessNestedGroups(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/nested-useless-groups.svg', + parse_args(['--disable-group-collapsing'])) + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'g')), 2, + 'Useless nested groups were removed despite --disable-group-collapsing') + + +class DoNotRemoveNestedGroupsWithTitle(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/groups-with-title-desc.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'g')), 2, + 'Nested groups with title was removed') + + +class DoNotRemoveNestedGroupsWithDesc(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/groups-with-title-desc.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'g')), 2, + 'Nested groups with desc was removed') + + +class RemoveDuplicateLinearGradientStops(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/duplicate-gradient-stops.svg') + grad = doc.getElementsByTagNameNS(SVGNS, 'linearGradient') + self.assertEqual(len(grad[0].getElementsByTagNameNS(SVGNS, 'stop')), 3, + 'Duplicate linear gradient stops not removed') + + +class RemoveDuplicateLinearGradientStopsPct(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/duplicate-gradient-stops-pct.svg') + grad = doc.getElementsByTagNameNS(SVGNS, 'linearGradient') + self.assertEqual(len(grad[0].getElementsByTagNameNS(SVGNS, 'stop')), 3, + 'Duplicate linear gradient stops with percentages not removed') + + +class RemoveDuplicateRadialGradientStops(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/duplicate-gradient-stops.svg') + grad = doc.getElementsByTagNameNS(SVGNS, 'radialGradient') + self.assertEqual(len(grad[0].getElementsByTagNameNS(SVGNS, 'stop')), 3, + 'Duplicate radial gradient stops not removed') + + +class NoSodipodiNamespaceDecl(unittest.TestCase): + + def runTest(self): + attrs = scourXmlFile('unittests/sodipodi.svg').documentElement.attributes + for i in range(len(attrs)): + self.assertNotEqual(attrs.item(i).nodeValue, + 'http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd', + 'Sodipodi namespace declaration found') + + +class NoInkscapeNamespaceDecl(unittest.TestCase): + + def runTest(self): + attrs = scourXmlFile('unittests/inkscape.svg').documentElement.attributes + for i in range(len(attrs)): + self.assertNotEqual(attrs.item(i).nodeValue, + 'http://www.inkscape.org/namespaces/inkscape', + 'Inkscape namespace declaration found') + + +class NoSodipodiAttributes(unittest.TestCase): + + def runTest(self): + def findSodipodiAttr(elem): + attrs = elem.attributes + if attrs is None: + return True + for i in range(len(attrs)): + if attrs.item(i).namespaceURI == 'http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd': + return False + return True + self.assertNotEqual(walkTree(scourXmlFile('unittests/sodipodi.svg').documentElement, findSodipodiAttr), + False, + 'Found Sodipodi attributes') + + +class 
NoInkscapeAttributes(unittest.TestCase): + + def runTest(self): + def findInkscapeAttr(elem): + attrs = elem.attributes + if attrs is None: + return True + for i in range(len(attrs)): + if attrs.item(i).namespaceURI == 'http://www.inkscape.org/namespaces/inkscape': + return False + return True + self.assertNotEqual(walkTree(scourXmlFile('unittests/inkscape.svg').documentElement, findInkscapeAttr), + False, + 'Found Inkscape attributes') + + +class KeepInkscapeNamespaceDeclarationsWhenKeepEditorData(unittest.TestCase): + + def runTest(self): + options = ScourOptions + options.keep_editor_data = True + attrs = scourXmlFile('unittests/inkscape.svg', options).documentElement.attributes + FoundNamespace = False + for i in range(len(attrs)): + if attrs.item(i).nodeValue == 'http://www.inkscape.org/namespaces/inkscape': + FoundNamespace = True + break + self.assertEqual(True, FoundNamespace, + "Did not find Inkscape namespace declaration when using --keep-editor-data") + return False + + +class KeepSodipodiNamespaceDeclarationsWhenKeepEditorData(unittest.TestCase): + + def runTest(self): + options = ScourOptions + options.keep_editor_data = True + attrs = scourXmlFile('unittests/sodipodi.svg', options).documentElement.attributes + FoundNamespace = False + for i in range(len(attrs)): + if attrs.item(i).nodeValue == 'http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd': + FoundNamespace = True + break + self.assertEqual(True, FoundNamespace, + "Did not find Sodipodi namespace declaration when using --keep-editor-data") + return False + + +class KeepReferencedFonts(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/referenced-font.svg') + fonts = doc.documentElement.getElementsByTagNameNS(SVGNS, 'font') + self.assertEqual(len(fonts), 1, + 'Font wrongly removed from <defs>') + + +class ConvertStyleToAttrs(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-transparent.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('style'), '', + 'style attribute not emptied') + + +class RemoveStrokeWhenStrokeTransparent(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-transparent.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke'), '', + 'stroke attribute not emptied when stroke opacity zero') + + +class RemoveStrokeWidthWhenStrokeTransparent(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-transparent.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-width'), '', + 'stroke-width attribute not emptied when stroke opacity zero') + + +class RemoveStrokeLinecapWhenStrokeTransparent(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-transparent.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-linecap'), '', + 'stroke-linecap attribute not emptied when stroke opacity zero') + + +class RemoveStrokeLinejoinWhenStrokeTransparent(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-transparent.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-linejoin'), '', + 'stroke-linejoin attribute not emptied when stroke opacity zero') + + +class RemoveStrokeDasharrayWhenStrokeTransparent(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-transparent.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 
'path')[0].getAttribute('stroke-dasharray'), '', + 'stroke-dasharray attribute not emptied when stroke opacity zero') + + +class RemoveStrokeDashoffsetWhenStrokeTransparent(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-transparent.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-dashoffset'), '', + 'stroke-dashoffset attribute not emptied when stroke opacity zero') + + +class RemoveStrokeWhenStrokeWidthZero(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-nowidth.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke'), '', + 'stroke attribute not emptied when width zero') + + +class RemoveStrokeOpacityWhenStrokeWidthZero(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-nowidth.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-opacity'), '', + 'stroke-opacity attribute not emptied when width zero') + + +class RemoveStrokeLinecapWhenStrokeWidthZero(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-nowidth.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-linecap'), '', + 'stroke-linecap attribute not emptied when width zero') + + +class RemoveStrokeLinejoinWhenStrokeWidthZero(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-nowidth.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-linejoin'), '', + 'stroke-linejoin attribute not emptied when width zero') + + +class RemoveStrokeDasharrayWhenStrokeWidthZero(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-nowidth.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-dasharray'), '', + 'stroke-dasharray attribute not emptied when width zero') + + +class RemoveStrokeDashoffsetWhenStrokeWidthZero(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-nowidth.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-dashoffset'), '', + 'stroke-dashoffset attribute not emptied when width zero') + + +class RemoveStrokeWhenStrokeNone(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke'), '', + 'stroke attribute not emptied when no stroke') + + +class KeepStrokeWhenInheritedFromParent(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + self.assertEqual(doc.getElementById('p1').getAttribute('stroke'), 'none', + 'stroke attribute removed despite a different value being inherited from a parent') + + +class KeepStrokeWhenInheritedByChild(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + self.assertEqual(doc.getElementById('g2').getAttribute('stroke'), 'none', + 'stroke attribute removed despite it being inherited by a child') + + +class RemoveStrokeWidthWhenStrokeNone(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-width'), '', + 'stroke-width attribute not emptied when no stroke') + + +class KeepStrokeWidthWhenInheritedByChild(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + 
self.assertEqual(doc.getElementById('g3').getAttribute('stroke-width'), '1px', + 'stroke-width attribute removed despite it being inherited by a child') + + +class RemoveStrokeOpacityWhenStrokeNone(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-opacity'), '', + 'stroke-opacity attribute not emptied when no stroke') + + +class RemoveStrokeLinecapWhenStrokeNone(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-linecap'), '', + 'stroke-linecap attribute not emptied when no stroke') + + +class RemoveStrokeLinejoinWhenStrokeNone(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-linejoin'), '', + 'stroke-linejoin attribute not emptied when no stroke') + + +class RemoveStrokeDasharrayWhenStrokeNone(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-dasharray'), '', + 'stroke-dasharray attribute not emptied when no stroke') + + +class RemoveStrokeDashoffsetWhenStrokeNone(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/stroke-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('stroke-dashoffset'), '', + 'stroke-dashoffset attribute not emptied when no stroke') + + +class RemoveFillRuleWhenFillNone(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/fill-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('fill-rule'), '', + 'fill-rule attribute not emptied when no fill') + + +class RemoveFillOpacityWhenFillNone(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/fill-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('fill-opacity'), '', + 'fill-opacity attribute not emptied when no fill') + + +class ConvertFillPropertyToAttr(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/fill-none.svg', + parse_args(['--disable-simplify-colors'])) + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[1].getAttribute('fill'), 'black', + 'fill property not converted to XML attribute') + + +class ConvertFillOpacityPropertyToAttr(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/fill-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[1].getAttribute('fill-opacity'), '.5', + 'fill-opacity property not converted to XML attribute') + + +class ConvertFillRuleOpacityPropertyToAttr(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/fill-none.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'path')[1].getAttribute('fill-rule'), 'evenodd', + 'fill-rule property not converted to XML attribute') + + +class CollapseSinglyReferencedGradients(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/collapse-gradients.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'linearGradient')), 0, + 'Singly-referenced linear gradient not collapsed') + + +class InheritGradientUnitsUponCollapsing(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/collapse-gradients.svg') + 
self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'radialGradient')[0].getAttribute('gradientUnits'), + 'userSpaceOnUse', + 'gradientUnits not properly inherited when collapsing gradients') + + +class OverrideGradientUnitsUponCollapsing(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/collapse-gradients-gradientUnits.svg') + self.assertEqual(doc.getElementsByTagNameNS(SVGNS, 'radialGradient')[0].getAttribute('gradientUnits'), '', + 'gradientUnits not properly overrode when collapsing gradients') + + +class DoNotCollapseMultiplyReferencedGradients(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/dont-collapse-gradients.svg') + self.assertNotEqual(len(doc.getElementsByTagNameNS(SVGNS, 'linearGradient')), 0, + 'Multiply-referenced linear gradient collapsed') + + +class PreserveXLinkHrefWhenCollapsingReferencedGradients(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/collapse-gradients-preserve-xlink-href.svg') + g1 = doc.getElementById("g1") + g2 = doc.getElementById("g2") + g3 = doc.getElementById("g3") + self.assertTrue(g1, 'g1 is still present') + self.assertTrue(g2 is None, 'g2 was removed') + self.assertTrue(g3, 'g3 is still present') + self.assertEqual(g3.getAttributeNS('http://www.w3.org/1999/xlink', 'href'), '#g1', + 'g3 has a xlink:href to g1') + + +class RemoveTrailingZerosFromPath(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-truncate-zeros.svg') + path = doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d') + self.assertEqual(path[:4] == 'm300' and path[4] != '.', True, + 'Trailing zeros not removed from path data') + + +class RemoveTrailingZerosFromPathAfterCalculation(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-truncate-zeros-calc.svg') + path = doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d') + self.assertEqual(path, 'm5.81 0h0.1', + 'Trailing zeros not removed from path data after calculation') + + +class RemoveDelimiterBeforeNegativeCoordsInPath(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-truncate-zeros.svg') + path = doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d') + self.assertEqual(path[4], '-', + 'Delimiters not removed before negative coordinates in path data') + + +class UseScientificNotationToShortenCoordsInPath(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-use-scientific-notation.svg') + path = doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d') + self.assertEqual(path, 'm1e4 0', + 'Not using scientific notation for path coord when representation is shorter') + + +class ConvertAbsoluteToRelativePathCommands(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-abs-to-rel.svg') + path = svg_parser.parse(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d')) + self.assertEqual(path[1][0], 'v', + 'Absolute V command not converted to relative v command') + self.assertEqual(float(path[1][1][0]), -20.0, + 'Absolute V value not converted to relative v value') + + +class RoundPathData(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-precision.svg') + path = svg_parser.parse(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d')) + self.assertEqual(float(path[0][1][0]), 100.0, + 'Not rounding down') + self.assertEqual(float(path[0][1][1]), 100.0, + 'Not rounding up') + + +class LimitPrecisionInPathData(unittest.TestCase): + + def 
runTest(self): + doc = scourXmlFile('unittests/path-precision.svg') + path = svg_parser.parse(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d')) + self.assertEqual(float(path[1][1][0]), 100.01, + 'Not correctly limiting precision on path data') + + +class KeepPrecisionInPathDataIfSameLength(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-precision.svg', parse_args(['--set-precision=1'])) + paths = doc.getElementsByTagNameNS(SVGNS, 'path') + for path in paths[1:3]: + self.assertEqual(path.getAttribute('d'), "m1 21 321 4e3 5e4 7e5", + 'Precision not correctly reduced with "--set-precision=1" ' + 'for path with ID ' + path.getAttribute('id')) + self.assertEqual(paths[4].getAttribute('d'), "m-1-21-321-4e3 -5e4 -7e5", + 'Precision not correctly reduced with "--set-precision=1" ' + 'for path with ID ' + paths[4].getAttribute('id')) + self.assertEqual(paths[5].getAttribute('d'), "m123 101-123-101", + 'Precision not correctly reduced with "--set-precision=1" ' + 'for path with ID ' + paths[5].getAttribute('id')) + + doc = scourXmlFile('unittests/path-precision.svg', parse_args(['--set-precision=2'])) + paths = doc.getElementsByTagNameNS(SVGNS, 'path') + for path in paths[1:3]: + self.assertEqual(path.getAttribute('d'), "m1 21 321 4321 54321 6.5e5", + 'Precision not correctly reduced with "--set-precision=2" ' + 'for path with ID ' + path.getAttribute('id')) + self.assertEqual(paths[4].getAttribute('d'), "m-1-21-321-4321-54321-6.5e5", + 'Precision not correctly reduced with "--set-precision=2" ' + 'for path with ID ' + paths[4].getAttribute('id')) + self.assertEqual(paths[5].getAttribute('d'), "m123 101-123-101", + 'Precision not correctly reduced with "--set-precision=2" ' + 'for path with ID ' + paths[5].getAttribute('id')) + + doc = scourXmlFile('unittests/path-precision.svg', parse_args(['--set-precision=3'])) + paths = doc.getElementsByTagNameNS(SVGNS, 'path') + for path in paths[1:3]: + self.assertEqual(path.getAttribute('d'), "m1 21 321 4321 54321 654321", + 'Precision not correctly reduced with "--set-precision=3" ' + 'for path with ID ' + path.getAttribute('id')) + self.assertEqual(paths[4].getAttribute('d'), "m-1-21-321-4321-54321-654321", + 'Precision not correctly reduced with "--set-precision=3" ' + 'for path with ID ' + paths[4].getAttribute('id')) + self.assertEqual(paths[5].getAttribute('d'), "m123 101-123-101", + 'Precision not correctly reduced with "--set-precision=3" ' + 'for path with ID ' + paths[5].getAttribute('id')) + + doc = scourXmlFile('unittests/path-precision.svg', parse_args(['--set-precision=4'])) + paths = doc.getElementsByTagNameNS(SVGNS, 'path') + for path in paths[1:3]: + self.assertEqual(path.getAttribute('d'), "m1 21 321 4321 54321 654321", + 'Precision not correctly reduced with "--set-precision=4" ' + 'for path with ID ' + path.getAttribute('id')) + self.assertEqual(paths[4].getAttribute('d'), "m-1-21-321-4321-54321-654321", + 'Precision not correctly reduced with "--set-precision=4" ' + 'for path with ID ' + paths[4].getAttribute('id')) + self.assertEqual(paths[5].getAttribute('d'), "m123.5 101-123.5-101", + 'Precision not correctly reduced with "--set-precision=4" ' + 'for path with ID ' + paths[5].getAttribute('id')) + + +class LimitPrecisionInControlPointPathData(unittest.TestCase): + + def runTest(self): + path_data = ("m1.1 2.2 3.3 4.4m-4.4-6.7" + "c1 2 3 4 5.6 6.7 1 2 3 4 5.6 6.7 1 2 3 4 5.6 6.7m-17-20" + "s1 2 3.3 4.4 1 2 3.3 4.4 1 2 3.3 4.4m-10-13" + "q1 2 3.3 4.4 1 2 3.3 4.4 1 2 3.3 4.4") + doc = 
scourXmlFile('unittests/path-precision-control-points.svg', + parse_args(['--set-precision=2', '--set-c-precision=1'])) + path_data2 = doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d') + self.assertEqual(path_data2, path_data, + 'Not correctly limiting precision on path data with --set-c-precision') + + +class RemoveEmptyLineSegmentsFromPath(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-line-optimize.svg') + path = svg_parser.parse(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d')) + self.assertEqual(path[4][0], 'z', + 'Did not remove an empty line segment from path') + + +class RemoveEmptySegmentsFromPathWithButtLineCaps(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-with-caps.svg', parse_args(['--disable-style-to-xml'])) + for id in ['none', 'attr_butt', 'style_butt']: + path = svg_parser.parse(doc.getElementById(id).getAttribute('d')) + self.assertEqual(len(path), 1, + 'Did not remove empty segments when path had butt linecaps') + + +class DoNotRemoveEmptySegmentsFromPathWithRoundSquareLineCaps(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-with-caps.svg', parse_args(['--disable-style-to-xml'])) + for id in ['attr_round', 'attr_square', 'style_round', 'style_square']: + path = svg_parser.parse(doc.getElementById(id).getAttribute('d')) + self.assertEqual(len(path), 2, + 'Did remove empty segments when path had round or square linecaps') + + +class ChangeLineToHorizontalLineSegmentInPath(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-line-optimize.svg') + path = svg_parser.parse(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d')) + self.assertEqual(path[1][0], 'h', + 'Did not change line to horizontal line segment in path') + self.assertEqual(float(path[1][1][0]), 200.0, + 'Did not calculate horizontal line segment in path correctly') + + +class ChangeLineToVerticalLineSegmentInPath(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-line-optimize.svg') + path = svg_parser.parse(doc.getElementsByTagNameNS(SVGNS, 'path')[0].getAttribute('d')) + self.assertEqual(path[2][0], 'v', + 'Did not change line to vertical line segment in path') + self.assertEqual(float(path[2][1][0]), 100.0, + 'Did not calculate vertical line segment in path correctly') + + +class ChangeBezierToShorthandInPath(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-bez-optimize.svg') + self.assertEqual(doc.getElementById('path1').getAttribute('d'), 'm10 100c50-50 50 50 100 0s50 50 100 0', + 'Did not change bezier curves into shorthand curve segments in path') + self.assertEqual(doc.getElementById('path2a').getAttribute('d'), 'm200 200s200 100 200 0', + 'Did not change bezier curve into shorthand curve segment when first control point ' + 'is the current point and previous command was not a bezier curve') + self.assertEqual(doc.getElementById('path2b').getAttribute('d'), 'm0 300s200-100 200 0c0 0 200 100 200 0', + 'Did change bezier curve into shorthand curve segment when first control point ' + 'is the current point but previous command was a bezier curve with a different control point') + + +class ChangeQuadToShorthandInPath(unittest.TestCase): + + def runTest(self): + path = scourXmlFile('unittests/path-quad-optimize.svg').getElementsByTagNameNS(SVGNS, 'path')[0] + self.assertEqual(path.getAttribute('d'), 'm10 100q50-50 100 0t100 0', + 'Did not change quadratic curves into shorthand curve 
segments in path') + + +class BooleanFlagsInEllipticalPath(unittest.TestCase): + + def test_omit_spaces(self): + doc = scourXmlFile('unittests/path-elliptical-flags.svg', parse_args(['--no-renderer-workaround'])) + paths = doc.getElementsByTagNameNS(SVGNS, 'path') + for path in paths: + self.assertEqual(path.getAttribute('d'), 'm0 0a100 50 0 00100 50', + 'Did not omit spaces after boolean flags in elliptical arc path command') + + def test_output_spaces_with_renderer_workaround(self): + doc = scourXmlFile('unittests/path-elliptical-flags.svg', parse_args(['--renderer-workaround'])) + paths = doc.getElementsByTagNameNS(SVGNS, 'path') + for path in paths: + self.assertEqual(path.getAttribute('d'), 'm0 0a100 50 0 0 0 100 50', + 'Did not output spaces after boolean flags in elliptical arc path command ' + 'with renderer workaround') + + +class DoNotOptimizePathIfLarger(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/path-no-optimize.svg').getElementsByTagNameNS(SVGNS, 'path')[0] + self.assertTrue(len(p.getAttribute('d')) <= + # this was the scoured path data as of 2016-08-31 without the length check in cleanPath(): + # d="m100 100l100.12 100.12c14.877 4.8766-15.123-5.1234-0.00345-0.00345z" + len("M100,100 L200.12345,200.12345 C215,205 185,195 200.12,200.12 Z"), + 'Made path data longer during optimization') + + +class HandleEncodingUTF8(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/encoding-utf8.svg') + text = u'Hello in many languages:\n' \ + u'ar: أهلا\n' \ + u'bn: হ্যালো\n' \ + u'el: Χαίρετε\n' \ + u'en: Hello\n' \ + u'hi: नमस्ते\n' \ + u'iw: שלום\n' \ + u'ja: こんにちは\n' \ + u'km: ជំរាបសួរ\n' \ + u'ml: ഹലോ\n' \ + u'ru: Здравствуйте\n' \ + u'ur: ہیلو\n' \ + u'zh: 您好' + desc = six.text_type(doc.getElementsByTagNameNS(SVGNS, 'desc')[0].firstChild.wholeText).strip() + self.assertEqual(desc, text, + 'Did not handle international UTF8 characters') + desc = six.text_type(doc.getElementsByTagNameNS(SVGNS, 'desc')[1].firstChild.wholeText).strip() + self.assertEqual(desc, u'“”‘’–—…‐‒°©®™•½¼¾⅓⅔†‡µ¢£€«»♠♣♥♦¿�', + 'Did not handle common UTF8 characters') + desc = six.text_type(doc.getElementsByTagNameNS(SVGNS, 'desc')[2].firstChild.wholeText).strip() + self.assertEqual(desc, u':-×÷±∞π∅≤≥≠≈∧∨∩∪∈∀∃∄∑∏←↑→↓↔↕↖↗↘↙↺↻⇒⇔', + 'Did not handle mathematical UTF8 characters') + desc = six.text_type(doc.getElementsByTagNameNS(SVGNS, 'desc')[3].firstChild.wholeText).strip() + self.assertEqual(desc, u'⁰¹²³⁴⁵⁶⁷⁸⁹⁺⁻⁽⁾ⁿⁱ₀₁₂₃₄₅₆₇₈₉₊₋₌₍₎', + 'Did not handle superscript/subscript UTF8 characters') + + +class HandleEncodingISO_8859_15(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/encoding-iso-8859-15.svg') + desc = six.text_type(doc.getElementsByTagNameNS(SVGNS, 'desc')[0].firstChild.wholeText).strip() + self.assertEqual(desc, u'áèîäöüß€ŠšŽžŒœŸ', 'Did not handle ISO 8859-15 encoded characters') + + +class HandleSciNoInPathData(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-sn.svg') + self.assertEqual(len(doc.getElementsByTagNameNS(SVGNS, 'path')), 1, + 'Did not handle scientific notation in path data') + + +class TranslateRGBIntoHex(unittest.TestCase): + + def runTest(self): + elem = scourXmlFile('unittests/color-formats.svg').getElementsByTagNameNS(SVGNS, 'rect')[0] + self.assertEqual(elem.getAttribute('fill'), '#0f1011', + 'Not converting rgb into hex') + + +class TranslateRGBPctIntoHex(unittest.TestCase): + + def runTest(self): + elem = scourXmlFile('unittests/color-formats.svg').getElementsByTagNameNS(SVGNS,
'stop')[0] + self.assertEqual(elem.getAttribute('stop-color'), '#7f0000', + 'Not converting rgb pct into hex') + + +class TranslateColorNamesIntoHex(unittest.TestCase): + + def runTest(self): + elem = scourXmlFile('unittests/color-formats.svg').getElementsByTagNameNS(SVGNS, 'rect')[0] + self.assertEqual(elem.getAttribute('stroke'), '#a9a9a9', + 'Not converting standard color names into hex') + + +class TranslateExtendedColorNamesIntoHex(unittest.TestCase): + + def runTest(self): + elem = scourXmlFile('unittests/color-formats.svg').getElementsByTagNameNS(SVGNS, 'solidColor')[0] + self.assertEqual(elem.getAttribute('solid-color'), '#fafad2', + 'Not converting extended color names into hex') + + +class TranslateLongHexColorIntoShortHex(unittest.TestCase): + + def runTest(self): + elem = scourXmlFile('unittests/color-formats.svg').getElementsByTagNameNS(SVGNS, 'ellipse')[0] + self.assertEqual(elem.getAttribute('fill'), '#fff', + 'Not converting long hex color into short hex') + + +class DoNotConvertShortColorNames(unittest.TestCase): + + def runTest(self): + elem = scourXmlFile('unittests/dont-convert-short-color-names.svg') \ + .getElementsByTagNameNS(SVGNS, 'rect')[0] + self.assertEqual('red', elem.getAttribute('fill'), + 'Converted short color name to longer hex string') + + +class AllowQuotEntitiesInUrl(unittest.TestCase): + + def runTest(self): + grads = scourXmlFile('unittests/quot-in-url.svg').getElementsByTagNameNS(SVGNS, 'linearGradient') + self.assertEqual(len(grads), 1, + 'Removed referenced gradient when " was in the url') + + +class RemoveFontStylesFromNonTextShapes(unittest.TestCase): + + def runTest(self): + r = scourXmlFile('unittests/font-styles.svg').getElementsByTagNameNS(SVGNS, 'rect')[0] + self.assertEqual(r.getAttribute('font-size'), '', + 'font-size not removed from rect') + + +class CollapseStraightPathSegments(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/collapse-straight-path-segments.svg', parse_args(['--disable-style-to-xml'])) + paths = doc.getElementsByTagNameNS(SVGNS, 'path') + path_data = [path.getAttribute('d') for path in paths] + path_data_expected = ['m0 0h30', + 'm0 0v30', + 'm0 0h10.5v10.5', + 'm0 0h10-1v10-1', + 'm0 0h30', + 'm0 0h30', + 'm0 0h10 20', + 'm0 0h10 20', + 'm0 0h10 20', + 'm0 0h10 20', + 'm0 0 20 40v1l10 20', + 'm0 0 10 10-20-20 10 10-20-20', + 'm0 0 1 2m1 2 2 4m1 2 2 4', + 'm6.3228 7.1547 81.198 45.258'] + + self.assertEqual(path_data[0:3], path_data_expected[0:3], + 'Did not collapse h/v commands into a single h/v commands') + self.assertEqual(path_data[3], path_data_expected[3], + 'Collapsed h/v commands with different direction') + self.assertEqual(path_data[4:6], path_data_expected[4:6], + 'Did not collapse h/v commands with only start/end markers present') + self.assertEqual(path_data[6:10], path_data_expected[6:10], + 'Did not preserve h/v commands with intermediate markers present') + + self.assertEqual(path_data[10], path_data_expected[10], + 'Did not collapse lineto commands into a single (implicit) lineto command') + self.assertEqual(path_data[11], path_data_expected[11], + 'Collapsed lineto commands with different direction') + self.assertEqual(path_data[12], path_data_expected[12], + 'Collapsed first parameter pair of a moveto subpath') + self.assertEqual(path_data[13], path_data_expected[13], + 'Did not collapse the nodes of a straight real world path') + + +class ConvertStraightCurvesToLines(unittest.TestCase): + + def runTest(self): + p = 
scourXmlFile('unittests/straight-curve.svg').getElementsByTagNameNS(SVGNS, 'path')[0] + self.assertEqual(p.getAttribute('d'), 'm10 10 40 40 40-40z', + 'Did not convert straight curves into lines') + + +class RemoveUnnecessaryPolygonEndPoint(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/polygon.svg').getElementsByTagNameNS(SVGNS, 'polygon')[0] + self.assertEqual(p.getAttribute('points'), '50 50 150 50 150 150 50 150', + 'Unnecessary polygon end point not removed') + + +class DoNotRemovePolgonLastPoint(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/polygon.svg').getElementsByTagNameNS(SVGNS, 'polygon')[1] + self.assertEqual(p.getAttribute('points'), '200 50 300 50 300 150 200 150', + 'Last point of polygon removed') + + +class ScourPolygonCoordsSciNo(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/polygon-coord.svg').getElementsByTagNameNS(SVGNS, 'polygon')[0] + self.assertEqual(p.getAttribute('points'), '1e4 50', + 'Polygon coordinates not scoured') + + +class ScourPolylineCoordsSciNo(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/polyline-coord.svg').getElementsByTagNameNS(SVGNS, 'polyline')[0] + self.assertEqual(p.getAttribute('points'), '1e4 50', + 'Polyline coordinates not scoured') + + +class ScourPolygonNegativeCoords(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/polygon-coord-neg.svg').getElementsByTagNameNS(SVGNS, 'polygon')[0] + # points="100,-100,100-100,100-100-100,-100-100,200" /> + self.assertEqual(p.getAttribute('points'), '100 -100 100 -100 100 -100 -100 -100 -100 200', + 'Negative polygon coordinates not properly parsed') + + +class ScourPolylineNegativeCoords(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/polyline-coord-neg.svg').getElementsByTagNameNS(SVGNS, 'polyline')[0] + self.assertEqual(p.getAttribute('points'), '100 -100 100 -100 100 -100 -100 -100 -100 200', + 'Negative polyline coordinates not properly parsed') + + +class ScourPolygonNegativeCoordFirst(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/polygon-coord-neg-first.svg').getElementsByTagNameNS(SVGNS, 'polygon')[0] + # points="-100,-100,100-100,100-100-100,-100-100,200" /> + self.assertEqual(p.getAttribute('points'), '-100 -100 100 -100 100 -100 -100 -100 -100 200', + 'Negative polygon coordinates not properly parsed') + + +class ScourPolylineNegativeCoordFirst(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/polyline-coord-neg-first.svg').getElementsByTagNameNS(SVGNS, 'polyline')[0] + self.assertEqual(p.getAttribute('points'), '-100 -100 100 -100 100 -100 -100 -100 -100 200', + 'Negative polyline coordinates not properly parsed') + + +class DoNotRemoveGroupsWithIDsInDefs(unittest.TestCase): + + def runTest(self): + f = scourXmlFile('unittests/important-groups-in-defs.svg') + self.assertEqual(len(f.getElementsByTagNameNS(SVGNS, 'linearGradient')), 1, + 'Group in defs with id\'ed element removed') + + +class AlwaysKeepClosePathSegments(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/path-with-closepath.svg').getElementsByTagNameNS(SVGNS, 'path')[0] + self.assertEqual(p.getAttribute('d'), 'm10 10h100v100h-100z', + 'Path with closepath not preserved') + + +class RemoveDuplicateLinearGradients(unittest.TestCase): + + def runTest(self): + svgdoc = scourXmlFile('unittests/remove-duplicate-gradients.svg') + lingrads = svgdoc.getElementsByTagNameNS(SVGNS, 'linearGradient') + 
self.assertEqual(1, lingrads.length, + 'Duplicate linear gradient not removed') + + +class RereferenceForLinearGradient(unittest.TestCase): + + def runTest(self): + svgdoc = scourXmlFile('unittests/remove-duplicate-gradients.svg') + rects = svgdoc.getElementsByTagNameNS(SVGNS, 'rect') + self.assertEqual(rects[0].getAttribute('fill'), rects[1].getAttribute('stroke'), + 'Reference not updated after removing duplicate linear gradient') + self.assertEqual(rects[0].getAttribute('fill'), rects[4].getAttribute('fill'), + 'Reference not updated after removing duplicate linear gradient') + + +class RemoveDuplicateRadialGradients(unittest.TestCase): + + def runTest(self): + svgdoc = scourXmlFile('unittests/remove-duplicate-gradients.svg') + radgrads = svgdoc.getElementsByTagNameNS(SVGNS, 'radialGradient') + self.assertEqual(1, radgrads.length, + 'Duplicate radial gradient not removed') + + +class RemoveDuplicateRadialGradientsEnsureMasterHasID(unittest.TestCase): + + def runTest(self): + svgdoc = scourXmlFile('unittests/remove-duplicate-gradients-master-without-id.svg') + lingrads = svgdoc.getElementsByTagNameNS(SVGNS, 'linearGradient') + rect = svgdoc.getElementById('r1') + self.assertEqual(1, lingrads.length, + 'Duplicate linearGradient not removed') + self.assertEqual(lingrads[0].getAttribute("id"), "g1", + "linearGradient has a proper ID") + self.assertNotEqual(rect.getAttribute("fill"), "url(#)", + "linearGradient has a proper ID") + + +class RereferenceForRadialGradient(unittest.TestCase): + + def runTest(self): + svgdoc = scourXmlFile('unittests/remove-duplicate-gradients.svg') + rects = svgdoc.getElementsByTagNameNS(SVGNS, 'rect') + self.assertEqual(rects[2].getAttribute('stroke'), rects[3].getAttribute('fill'), + 'Reference not updated after removing duplicate radial gradient') + + +class RereferenceForGradientWithFallback(unittest.TestCase): + + def runTest(self): + svgdoc = scourXmlFile('unittests/remove-duplicate-gradients.svg') + rects = svgdoc.getElementsByTagNameNS(SVGNS, 'rect') + self.assertEqual(rects[0].getAttribute('fill') + ' #fff', rects[5].getAttribute('fill'), + 'Reference (with fallback) not updated after removing duplicate linear gradient') + + +class CollapseSamePathPoints(unittest.TestCase): + + def runTest(self): + p = scourXmlFile('unittests/collapse-same-path-points.svg').getElementsByTagNameNS(SVGNS, 'path')[0] + self.assertEqual(p.getAttribute('d'), "m100 100 100.12 100.12c14.877 4.8766-15.123-5.1234 0 0z", + 'Did not collapse same path points') + + +class ScourUnitlessLengths(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/scour-lengths.svg') + r = doc.getElementsByTagNameNS(SVGNS, 'rect')[0] + svg = doc.documentElement + self.assertEqual(svg.getAttribute('x'), '1', + 'Did not scour x attribute of svg element with unitless number') + self.assertEqual(r.getAttribute('x'), '123.46', + 'Did not scour x attribute of rect with unitless number') + self.assertEqual(r.getAttribute('y'), '123', + 'Did not scour y attribute of rect unitless number') + self.assertEqual(r.getAttribute('width'), '300', + 'Did not scour width attribute of rect with unitless number') + self.assertEqual(r.getAttribute('height'), '100', + 'Did not scour height attribute of rect with unitless number') + + +class ScourLengthsWithUnits(unittest.TestCase): + + def runTest(self): + r = scourXmlFile('unittests/scour-lengths.svg').getElementsByTagNameNS(SVGNS, 'rect')[1] + self.assertEqual(r.getAttribute('x'), '123.46px', + 'Did not scour x attribute with unit') + 
self.assertEqual(r.getAttribute('y'), '35ex', + 'Did not scour y attribute with unit') + self.assertEqual(r.getAttribute('width'), '300pt', + 'Did not scour width attribute with unit') + self.assertEqual(r.getAttribute('height'), '50%', + 'Did not scour height attribute with unit') + + +class RemoveRedundantSvgNamespaceDeclaration(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/redundant-svg-namespace.svg').documentElement + self.assertNotEqual(doc.getAttribute('xmlns:svg'), 'http://www.w3.org/2000/svg', + 'Redundant svg namespace declaration not removed') + + +class RemoveRedundantSvgNamespacePrefix(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/redundant-svg-namespace.svg').documentElement + r = doc.getElementsByTagNameNS(SVGNS, 'rect')[1] + self.assertEqual(r.tagName, 'rect', + 'Redundant svg: prefix not removed from rect') + t = doc.getElementsByTagNameNS(SVGNS, 'text')[0] + self.assertEqual(t.tagName, 'text', + 'Redundant svg: prefix not removed from text') + + # Regression test for #239 + self.assertEqual(t.getAttribute('xml:space'), 'preserve', + 'Required xml: prefix removed in error') + self.assertEqual(t.getAttribute("space"), '', + 'Required xml: prefix removed in error') + + +class RemoveDefaultGradX1Value(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad1') + self.assertEqual(g.getAttribute('x1'), '', + 'x1="0" not removed') + + +class RemoveDefaultGradY1Value(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad1') + self.assertEqual(g.getAttribute('y1'), '', + 'y1="0" not removed') + + +class RemoveDefaultGradX2Value(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/gradient-default-attrs.svg') + self.assertEqual(doc.getElementById('grad1').getAttribute('x2'), '', + 'x2="100%" not removed') + self.assertEqual(doc.getElementById('grad1b').getAttribute('x2'), '', + 'x2="1" not removed, ' + 'which is equal to the default x2="100%" when gradientUnits="objectBoundingBox"') + self.assertNotEqual(doc.getElementById('grad1c').getAttribute('x2'), '', + 'x2="1" removed, ' + 'which is NOT equal to the default x2="100%" when gradientUnits="userSpaceOnUse"') + + +class RemoveDefaultGradY2Value(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad1') + self.assertEqual(g.getAttribute('y2'), '', + 'y2="0" not removed') + + +class RemoveDefaultGradGradientUnitsValue(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad1') + self.assertEqual(g.getAttribute('gradientUnits'), '', + 'gradientUnits="objectBoundingBox" not removed') + + +class RemoveDefaultGradSpreadMethodValue(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad1') + self.assertEqual(g.getAttribute('spreadMethod'), '', + 'spreadMethod="pad" not removed') + + +class RemoveDefaultGradCXValue(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad2') + self.assertEqual(g.getAttribute('cx'), '', + 'cx="50%" not removed') + + +class RemoveDefaultGradCYValue(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad2') + self.assertEqual(g.getAttribute('cy'), '', + 'cy="50%" not 
removed') + + +class RemoveDefaultGradRValue(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad2') + self.assertEqual(g.getAttribute('r'), '', + 'r="50%" not removed') + + +class RemoveDefaultGradFXValue(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad2') + self.assertEqual(g.getAttribute('fx'), '', + 'fx matching cx not removed') + + +class RemoveDefaultGradFYValue(unittest.TestCase): + + def runTest(self): + g = scourXmlFile('unittests/gradient-default-attrs.svg').getElementById('grad2') + self.assertEqual(g.getAttribute('fy'), '', + 'fy matching cy not removed') + + +class RemoveDefaultAttributeOrderSVGLengthCrash(unittest.TestCase): + + # Triggered a crash in v0.36 + def runTest(self): + try: + scourXmlFile('unittests/remove-default-attr-order.svg') + except AttributeError: + self.fail("Processing the order attribute triggered an AttributeError") + + +class RemoveDefaultAttributeStdDeviationSVGLengthCrash(unittest.TestCase): + + # Triggered a crash in v0.36 + def runTest(self): + try: + scourXmlFile('unittests/remove-default-attr-std-deviation.svg') + except AttributeError: + self.fail("Processing the stdDeviation attribute triggered an AttributeError") + + +class CDATAInXml(unittest.TestCase): + + def runTest(self): + with open('unittests/cdata.svg') as f: + lines = scourString(f.read()).splitlines() + self.assertEqual(lines[3], + " alert('pb&j');", + 'CDATA did not come out correctly') + + +class WellFormedXMLLesserThanInAttrValue(unittest.TestCase): + + def runTest(self): + with open('unittests/xml-well-formed.svg') as f: + wellformed = scourString(f.read()) + self.assertTrue(wellformed.find('unicode="&lt;"') != -1, + "Improperly serialized < in attribute value") + + +class WellFormedXMLAmpersandInAttrValue(unittest.TestCase): + + def runTest(self): + with open('unittests/xml-well-formed.svg') as f: + wellformed = scourString(f.read()) + self.assertTrue(wellformed.find('unicode="&amp;"') != -1, + 'Improperly serialized & in attribute value') + + +class WellFormedXMLLesserThanInTextContent(unittest.TestCase): + + def runTest(self): + with open('unittests/xml-well-formed.svg') as f: + wellformed = scourString(f.read()) + self.assertTrue(wellformed.find('<title>2 &lt; 5') != -1, + 'Improperly serialized < in text content') + + +class WellFormedXMLAmpersandInTextContent(unittest.TestCase): + + def runTest(self): + with open('unittests/xml-well-formed.svg') as f: + wellformed = scourString(f.read()) + self.assertTrue(wellformed.find('Peanut Butter &amp; Jelly') != -1, + 'Improperly serialized & in text content') + + +class WellFormedXMLNamespacePrefixRemoveUnused(unittest.TestCase): + + def runTest(self): + with open('unittests/xml-well-formed.svg') as f: + wellformed = scourString(f.read()) + self.assertTrue(wellformed.find('xmlns:foo=') == -1, + 'Improperly serialized namespace prefix declarations: Unused namespace declaration not removed') + + +class WellFormedXMLNamespacePrefixKeepUsedElementPrefix(unittest.TestCase): + + def runTest(self): + with open('unittests/xml-well-formed.svg') as f: + wellformed = scourString(f.read()) + self.assertTrue(wellformed.find('xmlns:bar=') != -1, + 'Improperly serialized namespace prefix declarations: Used element prefix removed') + + +class WellFormedXMLNamespacePrefixKeepUsedAttributePrefix(unittest.TestCase): + + def runTest(self): + with open('unittests/xml-well-formed.svg') as f: + wellformed = scourString(f.read()) +
self.assertTrue(wellformed.find('xmlns:baz=') != -1, + 'Improperly serialized namespace prefix declarations: Used attribute prefix removed') + + +class NamespaceDeclPrefixesInXMLWhenNotInDefaultNamespace(unittest.TestCase): + + def runTest(self): + with open('unittests/xml-ns-decl.svg') as f: + xmlstring = scourString(f.read()) + self.assertTrue(xmlstring.find('xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"') != -1, + 'Improperly serialized namespace prefix declarations when not in default namespace') + + +class MoveSVGElementsToDefaultNamespace(unittest.TestCase): + + def runTest(self): + with open('unittests/xml-ns-decl.svg') as f: + xmlstring = scourString(f.read()) + self.assertTrue(xmlstring.find(' does not inherit xml:space="preserve" of parent text element') + text = self.doc.getElementById('txt_c2') + self.assertIn('text1 text2', text.toxml(), + 'xml:space="default" of does not overwrite xml:space="preserve" of parent text element') + text = self.doc.getElementById('txt_c3') + self.assertIn('text1 text2', text.toxml(), + 'xml:space="preserve" of does not overwrite xml:space="default" of parent text element') + text = self.doc.getElementById('txt_c4') + self.assertIn('text1 text2', text.toxml(), + ' does not inherit xml:space="preserve" of parent group') + text = self.doc.getElementById('txt_c5') + self.assertIn('text1 text2', text.toxml(), + 'xml:space="default" of text element does not overwrite xml:space="preserve" of parent group') + text = self.doc.getElementById('txt_c6') + self.assertIn('text1 text2', text.toxml(), + 'xml:space="preserve" of text element does not overwrite xml:space="default" of parent group') + + def test_important_whitespace(self): + text = self.doc.getElementById('txt_d1') + self.assertIn('text1 text2', text.toxml(), + 'Newline with whitespace collapsed in text element') + text = self.doc.getElementById('txt_d2') + self.assertIn('text1 tspan1 text2', text.toxml(), + 'Whitespace stripped from the middle of a text element') + text = self.doc.getElementById('txt_d3') + self.assertIn('text1 tspan1 tspan2 text2', text.toxml(), + 'Whitespace stripped from the middle of a text element') + + def test_incorrect_whitespace(self): + text = self.doc.getElementById('txt_e1') + self.assertIn('text1text2', text.toxml(), + 'Whitespace introduced in text element with newline') + text = self.doc.getElementById('txt_e2') + self.assertIn('text1tspantext2', text.toxml(), + 'Whitespace introduced in text element with ') + text = self.doc.getElementById('txt_e3') + self.assertIn('text1tspantext2', text.toxml(), + 'Whitespace introduced in text element with and newlines') + + +class GetAttrPrefixRight(unittest.TestCase): + + def runTest(self): + grad = scourXmlFile('unittests/xml-namespace-attrs.svg') \ + .getElementsByTagNameNS(SVGNS, 'linearGradient')[1] + self.assertEqual(grad.getAttributeNS('http://www.w3.org/1999/xlink', 'href'), '#linearGradient841', + 'Did not get xlink:href prefix right') + + +class EnsurePreserveWhitespaceOnNonTextElements(unittest.TestCase): + + def runTest(self): + with open('unittests/no-collapse-lines.svg') as f: + s = scourString(f.read()) + self.assertEqual(len(s.splitlines()), 6, + 'Did not properly preserve whitespace on elements even if they were not textual') + + +class HandleEmptyStyleElement(unittest.TestCase): + + def runTest(self): + try: + styles = scourXmlFile('unittests/empty-style.svg').getElementsByTagNameNS(SVGNS, 'style') + fail = len(styles) != 1 + except AttributeError: + fail = True + self.assertEqual(fail, False, + 
'Could not handle an empty style element') + + +class EnsureLineEndings(unittest.TestCase): + + def runTest(self): + with open('unittests/newlines.svg') as f: + s = scourString(f.read()) + self.assertEqual(len(s.splitlines()), 24, + 'Did not handle reading or outputting line ending characters correctly') + + +class XmlEntities(unittest.TestCase): + + def runTest(self): + self.assertEqual(make_well_formed('<>&'), '&lt;&gt;&amp;', + 'Incorrectly translated unquoted XML entities') + self.assertEqual(make_well_formed('<>&', XML_ENTS_ESCAPE_APOS), '&lt;&gt;&amp;', + 'Incorrectly translated single-quoted XML entities') + self.assertEqual(make_well_formed('<>&', XML_ENTS_ESCAPE_QUOT), '&lt;&gt;&amp;', + 'Incorrectly translated double-quoted XML entities') + + self.assertEqual(make_well_formed("'"), "'", + 'Incorrectly translated unquoted single quote') + self.assertEqual(make_well_formed('"'), '"', + 'Incorrectly translated unquoted double quote') + + self.assertEqual(make_well_formed("'", XML_ENTS_ESCAPE_QUOT), "'", + 'Incorrectly translated double-quoted single quote') + self.assertEqual(make_well_formed('"', XML_ENTS_ESCAPE_APOS), '"', + 'Incorrectly translated single-quoted double quote') + + self.assertEqual(make_well_formed("'", XML_ENTS_ESCAPE_APOS), '&apos;', + 'Incorrectly translated single-quoted single quote') + self.assertEqual(make_well_formed('"', XML_ENTS_ESCAPE_QUOT), '&quot;', + 'Incorrectly translated double-quoted double quote') + + +class HandleQuotesInAttributes(unittest.TestCase): + + def runTest(self): + with open('unittests/entities.svg', "rb") as f: + output = scourString(f.read()) + self.assertTrue('a="\'"' in output, + 'Failed on attribute value with non-double quote') + self.assertTrue("b='\"'" in output, + 'Failed on attribute value with non-single quote') + self.assertTrue("c=\"''&quot;\"" in output, + 'Failed on attribute value with more single quotes than double quotes') + self.assertTrue('d=\'""&apos;\'' in output, + 'Failed on attribute value with more double quotes than single quotes') + self.assertTrue("e=\"''&quot;&quot;\"" in output, + 'Failed on attribute value with the same number of double quotes as single quotes') + + +class PreserveQuotesInStyles(unittest.TestCase): + + def runTest(self): + with open('unittests/quotes-in-styles.svg', "rb") as f: + output = scourString(f.read()) + self.assertTrue('use[id="t"]' in output, + 'Failed to preserve quote characters in a style element') + self.assertTrue("'Times New Roman'" in output, + 'Failed to preserve quote characters in a style attribute') + + +class DoNotStripCommentsOutsideOfRoot(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/comments.svg') + self.assertEqual(doc.childNodes.length, 4, + 'Did not include all comment children outside of root') + self.assertEqual(doc.childNodes[0].nodeType, 8, 'First node not a comment') + self.assertEqual(doc.childNodes[1].nodeType, 8, 'Second node not a comment') + self.assertEqual(doc.childNodes[3].nodeType, 8, 'Fourth node not a comment') + + +class DoNotStripDoctype(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/doctype.svg') + self.assertEqual(doc.childNodes.length, 3, + 'Did not include the DOCTYPE') + self.assertEqual(doc.childNodes[0].nodeType, 8, 'First node not a comment') + self.assertEqual(doc.childNodes[1].nodeType, 10, 'Second node not a doctype') + self.assertEqual(doc.childNodes[2].nodeType, 1, 'Third node not the root node') + + +class PathImplicitLineWithMoveCommands(unittest.TestCase): + + def runTest(self): + path =
scourXmlFile('unittests/path-implicit-line.svg').getElementsByTagNameNS(SVGNS, 'path')[0] + self.assertEqual(path.getAttribute('d'), "m100 100v100m200-100h-200m200 100v-100", + "Implicit line segments after move not preserved") + + +class RemoveTitlesOption(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/full-descriptive-elements.svg', + parse_args(['--remove-titles'])) + self.assertEqual(doc.childNodes.length, 1, + 'Did not remove <title> tag with --remove-titles') + + +class RemoveDescriptionsOption(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/full-descriptive-elements.svg', + parse_args(['--remove-descriptions'])) + self.assertEqual(doc.childNodes.length, 1, + 'Did not remove <desc> tag with --remove-descriptions') + + +class RemoveMetadataOption(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/full-descriptive-elements.svg', + parse_args(['--remove-metadata'])) + self.assertEqual(doc.childNodes.length, 1, + 'Did not remove <metadata> tag with --remove-metadata') + + +class RemoveDescriptiveElementsOption(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/full-descriptive-elements.svg', + parse_args(['--remove-descriptive-elements'])) + self.assertEqual(doc.childNodes.length, 1, + 'Did not remove <title>, <desc> and <metadata> tags with --remove-descriptive-elements') + + +class EnableCommentStrippingOption(unittest.TestCase): + + def runTest(self): + with open('unittests/comment-beside-xml-decl.svg') as f: + docStr = f.read() + docStr = scourString(docStr, + parse_args(['--enable-comment-stripping'])) + self.assertEqual(docStr.find('<!--'), -1, + 'Did not remove document-level comment with --enable-comment-stripping') + + +class StripXmlPrologOption(unittest.TestCase): + + def runTest(self): + with open('unittests/comment-beside-xml-decl.svg') as f: + docStr = f.read() + docStr = scourString(docStr, + parse_args(['--strip-xml-prolog'])) + self.assertEqual(docStr.find('<?xml'), -1, + 'Did not remove <?xml?> with --strip-xml-prolog') + + +class ShortenIDsOption(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/shorten-ids.svg', + parse_args(['--shorten-ids'])) + gradientTag = doc.getElementsByTagName('linearGradient')[0] + self.assertEqual(gradientTag.getAttribute('id'), 'a', + "Did not shorten a linear gradient's ID with --shorten-ids") + rectTag = doc.getElementsByTagName('rect')[0] + self.assertEqual(rectTag.getAttribute('fill'), 'url(#a)', + 'Did not update reference to shortened ID') + + +class ShortenIDsStableOutput(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/shorten-ids-stable-output.svg', + parse_args(['--shorten-ids'])) + use_tags = doc.getElementsByTagName('use') + hrefs_ordered = [x.getAttributeNS('http://www.w3.org/1999/xlink', 'href') + for x in use_tags] + expected = ['#a', '#b', '#b'] + self.assertEqual(hrefs_ordered, expected, + '--shorten-ids pointlessly reassigned ids') + + +class MustKeepGInSwitch(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/groups-in-switch.svg') + self.assertEqual(doc.getElementsByTagName('g').length, 1, + 'Erroneously removed a <g> in a <switch>') + + +class MustKeepGInSwitch2(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/groups-in-switch-with-id.svg', + parse_args(['--enable-id-stripping'])) + self.assertEqual(doc.getElementsByTagName('g').length, 1, + 'Erroneously removed a <g> in a <switch>') + + +class
GroupSiblingMerge(unittest.TestCase): + + def test_sibling_merge(self): + doc = scourXmlFile('unittests/group-sibling-merge.svg', + parse_args([])) + self.assertEqual(doc.getElementsByTagName('g').length, 5, + 'Merged sibling <g> tags with similar values') + + def test_sibling_merge_disabled(self): + doc = scourXmlFile('unittests/group-sibling-merge.svg', + parse_args(['--disable-group-collapsing'])) + self.assertEqual(doc.getElementsByTagName('g').length, 8, + 'Sibling merging is disabled by --disable-group-collapsing') + + def test_sibling_merge_crash(self): + doc = scourXmlFile('unittests/group-sibling-merge-crash.svg', + parse_args([''])) + self.assertEqual(doc.getElementsByTagName('g').length, 1, + 'Sibling merge should work without causing crashes') + + +class GroupCreation(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/group-creation.svg', + parse_args(['--create-groups'])) + self.assertEqual(doc.getElementsByTagName('g').length, 1, + 'Did not create a <g> for a run of elements having similar attributes') + + +class GroupCreationForInheritableAttributesOnly(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/group-creation.svg', + parse_args(['--create-groups'])) + self.assertEqual(doc.getElementsByTagName('g').item(0).getAttribute('y'), '', + 'Promoted the uninheritable attribute y to a <g>') + + +class GroupNoCreation(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/group-no-creation.svg', + parse_args(['--create-groups'])) + self.assertEqual(doc.getElementsByTagName('g').length, 0, + 'Created a <g> for a run of elements having dissimilar attributes') + + +class GroupNoCreationForTspan(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/group-no-creation-tspan.svg', + parse_args(['--create-groups'])) + self.assertEqual(doc.getElementsByTagName('g').length, 0, + 'Created a <g> for a run of <tspan>s ' + 'that are not allowed as children according to content model') + + +class DoNotCommonizeAttributesOnReferencedElements(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/commonized-referenced-elements.svg') + self.assertEqual(doc.getElementsByTagName('circle')[0].getAttribute('fill'), '#0f0', + 'Grouped an element referenced elsewhere into a <g>') + + +class DoNotRemoveOverflowVisibleOnMarker(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/overflow-marker.svg') + self.assertEqual(doc.getElementById('m1').getAttribute('overflow'), 'visible', + 'Removed the overflow attribute when it was not using the default value') + self.assertEqual(doc.getElementById('m2').getAttribute('overflow'), '', + 'Did not remove the overflow attribute when it was using the default value') + + +class DoNotRemoveOrientAutoOnMarker(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/orient-marker.svg') + self.assertEqual(doc.getElementById('m1').getAttribute('orient'), 'auto', + 'Removed the orient attribute when it was not using the default value') + self.assertEqual(doc.getElementById('m2').getAttribute('orient'), '', + 'Did not remove the orient attribute when it was using the default value') + + +class MarkerOnSvgElements(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/overflow-svg.svg') + self.assertEqual(doc.getElementsByTagName('svg')[0].getAttribute('overflow'), '', + 'Did not remove the overflow attribute when it was using the default value') + 
self.assertEqual(doc.getElementsByTagName('svg')[1].getAttribute('overflow'), '', + 'Did not remove the overflow attribute when it was using the default value') + self.assertEqual(doc.getElementsByTagName('svg')[2].getAttribute('overflow'), 'visible', + 'Removed the overflow attribute when it was not using the default value') + + +class GradientReferencedByStyleCDATA(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/style-cdata.svg') + self.assertEqual(len(doc.getElementsByTagName('linearGradient')), 1, + 'Removed a gradient referenced by an internal stylesheet') + + +class ShortenIDsInStyleCDATA(unittest.TestCase): + + def runTest(self): + with open('unittests/style-cdata.svg') as f: + docStr = f.read() + docStr = scourString(docStr, + parse_args(['--shorten-ids'])) + self.assertEqual(docStr.find('somethingreallylong'), -1, + 'Did not shorten IDs in the internal stylesheet') + + +class StyleToAttr(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/style-to-attr.svg') + line = doc.getElementsByTagName('line')[0] + self.assertEqual(line.getAttribute('stroke'), '#000') + self.assertEqual(line.getAttribute('marker-start'), 'url(#m)') + self.assertEqual(line.getAttribute('marker-mid'), 'url(#m)') + self.assertEqual(line.getAttribute('marker-end'), 'url(#m)') + + +class PathCommandRewrites(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/path-command-rewrites.svg') + paths = doc.getElementsByTagName('path') + expected_paths = [ + ('m100 100 200 100', "Trailing m0 0z not removed"), + ('m100 100v200m0 0 100 100z', "Mangled m0 0 100 100"), + ("m100 100v200m0 0 2-1-2 1z", "Should have removed empty m0 0"), + ("m100 100v200l3-5-5 3m0 0 2-1-2 1z", "Rewrite m0 0 3-5-5 3 ... -> l3-5-5 3 ..."), + ("m100 100v200m0 0 3-5-5 3zm0 0 2-1-2 1z", "No rewrite of m0 0 3-5-5 3z"), + ] + self.assertEqual(len(paths), len(expected_paths), "len(actual_paths) != len(expected_paths)") + for i in range(len(paths)): + actual_path = paths[i].getAttribute('d') + expected_path, message = expected_paths[i] + self.assertEqual(actual_path, + expected_path, + '%s: "%s" != "%s"' % (message, actual_path, expected_path)) + + +class DefaultsRemovalToplevel(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/cascading-default-attribute-removal.svg') + self.assertEqual(doc.getElementsByTagName('path')[1].getAttribute('fill-rule'), '', + 'Default attribute fill-rule:nonzero not removed') + + +class DefaultsRemovalToplevelInverse(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/cascading-default-attribute-removal.svg') + self.assertEqual(doc.getElementsByTagName('path')[0].getAttribute('fill-rule'), 'evenodd', + 'Non-Default attribute fill-rule:evenodd removed') + + +class DefaultsRemovalToplevelFormat(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/cascading-default-attribute-removal.svg') + self.assertEqual(doc.getElementsByTagName('path')[0].getAttribute('stroke-width'), '', + 'Default attribute stroke-width:1.00 not removed') + + +class DefaultsRemovalInherited(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/cascading-default-attribute-removal.svg') + self.assertEqual(doc.getElementsByTagName('path')[3].getAttribute('fill-rule'), '', + 'Default attribute fill-rule:nonzero not removed in child') + + +class DefaultsRemovalInheritedInverse(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/cascading-default-attribute-removal.svg') + 
self.assertEqual(doc.getElementsByTagName('path')[2].getAttribute('fill-rule'), 'evenodd', + 'Non-Default attribute fill-rule:evenodd removed in child') + + +class DefaultsRemovalInheritedFormat(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/cascading-default-attribute-removal.svg') + self.assertEqual(doc.getElementsByTagName('path')[2].getAttribute('stroke-width'), '', + 'Default attribute stroke-width:1.00 not removed in child') + + +class DefaultsRemovalOverwrite(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/cascading-default-attribute-removal.svg') + self.assertEqual(doc.getElementsByTagName('path')[5].getAttribute('fill-rule'), 'nonzero', + 'Default attribute removed, although it overwrites parent element') + + +class DefaultsRemovalOverwriteMarker(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/cascading-default-attribute-removal.svg') + self.assertEqual(doc.getElementsByTagName('path')[4].getAttribute('marker-start'), 'none', + 'Default marker attribute removed, although it overwrites parent element') + + +class DefaultsRemovalNonOverwrite(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/cascading-default-attribute-removal.svg') + self.assertEqual(doc.getElementsByTagName('path')[10].getAttribute('fill-rule'), '', + 'Default attribute not removed, although its parent used default') + + +class RemoveDefsWithUnreferencedElements(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/useless-defs.svg') + self.assertEqual(doc.getElementsByTagName('defs').length, 0, + 'Kept defs, although it contains only unreferenced elements') + + +class RemoveDefsWithWhitespace(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/whitespace-defs.svg') + self.assertEqual(doc.getElementsByTagName('defs').length, 0, + 'Kept defs, although it contains only whitespace or is <defs/>') + + +class TransformIdentityMatrix(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-identity.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), '', + 'Transform containing identity matrix not removed') + + +class TransformRotate135(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-rotate-135.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'rotate(135)', + 'Rotation matrix not converted to rotate(135)') + + +class TransformRotate45(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-rotate-45.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'rotate(45)', + 'Rotation matrix not converted to rotate(45)') + + +class TransformRotate90(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-rotate-90.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'rotate(90)', + 'Rotation matrix not converted to rotate(90)') + + +class TransformRotateCCW135(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-rotate-225.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'rotate(225)', + 'Counter-clockwise rotation matrix not converted to rotate(225)') + + +class TransformRotateCCW45(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-rotate-neg-45.svg') + 
self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'rotate(-45)', + 'Counter-clockwise rotation matrix not converted to rotate(-45)') + + +class TransformRotateCCW90(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-rotate-neg-90.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'rotate(-90)', + 'Counter-clockwise rotation matrix not converted to rotate(-90)') + + +class TransformScale2by3(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-scale-2-3.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'scale(2 3)', + 'Scaling matrix not converted to scale(2 3)') + + +class TransformScaleMinus1(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-scale-neg-1.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'scale(-1)', + 'Scaling matrix not converted to scale(-1)') + + +class TransformTranslate(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-matrix-is-translate.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'translate(2 3)', + 'Translation matrix not converted to translate(2 3)') + + +class TransformRotationRange719_5(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-rotate-trim-range-719.5.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'rotate(-.5)', + 'Transform containing rotate(719.5) not shortened to rotate(-.5)') + + +class TransformRotationRangeCCW540_0(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-rotate-trim-range-neg-540.0.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'rotate(180)', + 'Transform containing rotate(-540.0) not shortened to rotate(180)') + + +class TransformRotation3Args(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-rotate-fold-3args.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), 'rotate(90)', + 'Optional zeroes in rotate(angle 0 0) not removed') + + +class TransformIdentityRotation(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-rotate-is-identity.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), '', + 'Transform containing identity rotation not removed') + + +class TransformIdentitySkewX(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-skewX-is-identity.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), '', + 'Transform containing identity X-axis skew not removed') + + +class TransformIdentitySkewY(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-skewY-is-identity.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), '', + 'Transform containing identity Y-axis skew not removed') + + +class TransformIdentityTranslate(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/transform-translate-is-identity.svg') + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), '', + 'Transform containing identity translation not removed') + + +class TransformIdentityScale(unittest.TestCase): + + def runTest(self): + try: + doc = 
scourXmlFile('unittests/transform-scale-is-identity.svg') + except IndexError: + self.fail("scour failed to handle scale(1) [See GH#190]") + self.assertEqual(doc.getElementsByTagName('line')[0].getAttribute('transform'), '', + 'Transform containing identity scale not removed') + + +class DuplicateGradientsUpdateStyle(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/duplicate-gradients-update-style.svg', + parse_args(['--disable-style-to-xml'])) + gradient = doc.getElementsByTagName('linearGradient')[0] + rects = doc.getElementsByTagName('rect') + self.assertEqual('fill:url(#' + gradient.getAttribute('id') + ')', rects[0].getAttribute('style'), + 'Either of #duplicate-one or #duplicate-two was removed, ' + 'but style="fill:" was not updated to reflect this') + self.assertEqual('fill:url(#' + gradient.getAttribute('id') + ')', rects[1].getAttribute('style'), + 'Either of #duplicate-one or #duplicate-two was removed, ' + 'but style="fill:" was not updated to reflect this') + self.assertEqual('fill:url(#' + gradient.getAttribute('id') + ') #fff', rects[2].getAttribute('style'), + 'Either of #duplicate-one or #duplicate-two was removed, ' + 'but style="fill:" (with fallback) was not updated to reflect this') + + +class DocWithFlowtext(unittest.TestCase): + + def runTest(self): + with self.assertRaises(Exception): + scourXmlFile('unittests/flowtext.svg', + parse_args(['--error-on-flowtext'])) + + +class DocWithNoFlowtext(unittest.TestCase): + + def runTest(self): + try: + scourXmlFile('unittests/flowtext-less.svg', + parse_args(['--error-on-flowtext'])) + except Exception as e: + self.fail("exception '{}' was raised, and we didn't expect that!".format(e)) + + +class ParseStyleAttribute(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/style.svg') + self.assertEqual(doc.documentElement.getAttribute('style'), + 'property1:value1;property2:value2;property3:value3', + "Style attribute not properly parsed and/or serialized") + + +class StripXmlSpaceAttribute(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/xml-space.svg', + parse_args(['--strip-xml-space'])) + self.assertEqual(doc.documentElement.getAttribute('xml:space'), '', + "'xml:space' attribute not removed from root SVG element " + "when '--strip-xml-space' was specified") + self.assertNotEqual(doc.getElementById('text1').getAttribute('xml:space'), '', + "'xml:space' attribute removed from a child element " + "when '--strip-xml-space' was specified (should only operate on root SVG element)") + + +class DoNotStripXmlSpaceAttribute(unittest.TestCase): + + def runTest(self): + doc = scourXmlFile('unittests/xml-space.svg') + self.assertNotEqual(doc.documentElement.getAttribute('xml:space'), '', + "'xml:space' attribute removed from root SVG element " + "when '--strip-xml-space' was NOT specified") + self.assertNotEqual(doc.getElementById('text1').getAttribute('xml:space'), '', + "'xml:space' attribute removed from a child element " + "when '--strip-xml-space' was NOT specified (should never be removed!)") + + +class CommandLineUsage(unittest.TestCase): + + USAGE_STRING = "Usage: scour [INPUT.SVG [OUTPUT.SVG]] [OPTIONS]" + MINIMAL_SVG = '<?xml version="1.0" encoding="UTF-8"?>\n' \ + '<svg xmlns="http://www.w3.org/2000/svg"/>\n' + TEMP_SVG_FILE = 'testscour_temp.svg' + + # wrapper function for scour.run() to emulate command line usage + # + # returns an object with the following attributes: + # status: the exit status + # stdout: a string representing the combined output
to 'stdout' + # stderr: a string representing the combined output to 'stderr' + def _run_scour(self): + class Result(object): + pass + + result = Result() + try: + run() + result.status = 0 + except SystemExit as exception: # catch any calls to sys.exit() + result.status = exception.code + result.stdout = self.temp_stdout.getvalue() + result.stderr = self.temp_stderr.getvalue() + + return result + + def setUp(self): + # store current values of 'argv', 'stdin', 'stdout' and 'stderr' + self.argv = sys.argv + self.stdin = sys.stdin + self.stdout = sys.stdout + self.stderr = sys.stderr + + # start with a fresh 'argv' + sys.argv = ['scour'] # TODO: Do we need a (more) valid 'argv[0]' for anything? + + # create 'stdin', 'stdout' and 'stderr' with behavior close to the original + # TODO: can we create file objects that behave *exactly* like the original? + # this is a mess since we have to ensure compatibility across Python 2 and 3 and it seems impossible + # to replicate all the details of 'stdin', 'stdout' and 'stderr' + class InOutBuffer(six.StringIO, object): + def write(self, string): + try: + return super(InOutBuffer, self).write(string) + except TypeError: + return super(InOutBuffer, self).write(string.decode()) + + sys.stdin = self.temp_stdin = InOutBuffer() + sys.stdout = self.temp_stdout = InOutBuffer() + sys.stderr = self.temp_stderr = InOutBuffer() + + self.temp_stdin.name = '<stdin>' # Scour wants to print the name of the input file... + + def tearDown(self): + # restore previous values of 'argv', 'stdin', 'stdout' and 'stderr' + sys.argv = self.argv + sys.stdin = self.stdin + sys.stdout = self.stdout + sys.stderr = self.stderr + + # clean up + self.temp_stdin.close() + self.temp_stdout.close() + self.temp_stderr.close() + + def test_no_arguments(self): + # we have to pretend that our input stream is a TTY, otherwise Scour waits for input from stdin + self.temp_stdin.isatty = lambda: True + + result = self._run_scour() + + self.assertEqual(result.status, 2, "Execution of 'scour' without any arguments should exit with status '2'") + self.assertTrue(self.USAGE_STRING in result.stderr, + "Usage information not displayed when calling 'scour' without any arguments") + + def test_version(self): + sys.argv.append('--version') + + result = self._run_scour() + + self.assertEqual(result.status, 0, "Execution of 'scour --version' errored") + self.assertEqual(__version__ + "\n", result.stdout, "Unexpected output of 'scour --version'") + + def test_help(self): + sys.argv.append('--help') + + result = self._run_scour() + + self.assertEqual(result.status, 0, "Execution of 'scour --help' errored") + self.assertTrue(self.USAGE_STRING in result.stdout and 'Options:' in result.stdout, + "Unexpected output of 'scour --help'") + + def test_stdin_stdout(self): + sys.stdin.write(self.MINIMAL_SVG) + sys.stdin.seek(0) + + result = self._run_scour() + + self.assertEqual(result.status, 0, "Usage of Scour via 'stdin' / 'stdout' errored") + self.assertEqual(result.stdout, self.MINIMAL_SVG, "Unexpected SVG output via 'stdout'") + + def test_filein_fileout_named(self): + sys.argv.extend(['-i', 'unittests/minimal.svg', '-o', self.TEMP_SVG_FILE]) + + result = self._run_scour() + + self.assertEqual(result.status, 0, "Usage of Scour with filenames specified as named parameters errored") + with open(self.TEMP_SVG_FILE) as file: + file_content = file.read() + self.assertEqual(file_content, self.MINIMAL_SVG, "Unexpected SVG output in generated file") + os.remove(self.TEMP_SVG_FILE) + + def
test_filein_fileout_positional(self): + sys.argv.extend(['unittests/minimal.svg', self.TEMP_SVG_FILE]) + + result = self._run_scour() + + self.assertEqual(result.status, 0, "Usage of Scour with filenames specified as positional parameters errored") + with open(self.TEMP_SVG_FILE) as file: + file_content = file.read() + self.assertEqual(file_content, self.MINIMAL_SVG, "Unexpected SVG output in generated file") + os.remove(self.TEMP_SVG_FILE) + + def test_quiet(self): + sys.argv.append('-q') + sys.argv.extend(['-i', 'unittests/minimal.svg', '-o', self.TEMP_SVG_FILE]) + + result = self._run_scour() + os.remove(self.TEMP_SVG_FILE) + + self.assertEqual(result.status, 0, "Execution of 'scour -q ...' errored") + self.assertEqual(result.stdout, '', "Output written to 'stdout' when '--quiet' option was used") + self.assertEqual(result.stderr, '', "Output written to 'stderr' when '--quiet' option was used") + + def test_verbose(self): + sys.argv.append('-v') + sys.argv.extend(['-i', 'unittests/minimal.svg', '-o', self.TEMP_SVG_FILE]) + + result = self._run_scour() + os.remove(self.TEMP_SVG_FILE) + + self.assertEqual(result.status, 0, "Execution of 'scour -v ...' errored") + self.assertEqual(result.stdout.count('Number'), 14, + "Statistics output not as expected when '--verbose' option was used") + self.assertEqual(result.stdout.count(': 0'), 14, + "Statistics output not as expected when '--verbose' option was used") + + +class EmbedRasters(unittest.TestCase): + + # quick way to ping a host using the OS 'ping' command and return the execution result + def _ping(host): + import os + import platform + + # work around https://github.com/travis-ci/travis-ci/issues/3080 as pypy throws if 'ping' can't be executed + import distutils.spawn + if not distutils.spawn.find_executable('ping'): + return -1 + + system = platform.system().lower() + ping_count = '-n' if system == 'windows' else '-c' + dev_null = 'NUL' if system == 'windows' else '/dev/null' + + return os.system('ping ' + ping_count + ' 1 ' + host + ' > ' + dev_null) + + def test_disable_embed_rasters(self): + doc = scourXmlFile('unittests/raster-formats.svg', + parse_args(['--disable-embed-rasters'])) + self.assertEqual(doc.getElementById('png').getAttribute('xlink:href'), 'raster.png', + "Raster image embedded when '--disable-embed-rasters' was specified") + + def test_raster_formats(self): + doc = scourXmlFile('unittests/raster-formats.svg') + self.assertEqual(doc.getElementById('png').getAttribute('xlink:href'), + 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAMAAAABAgMAAABmjvwnAAAAC' + 'VBMVEUAAP//AAAA/wBmtfVOAAAACklEQVQI12NIAAAAYgBhGxZhsAAAAABJRU5ErkJggg==', + "Raster image (PNG) not correctly embedded.") + self.assertEqual(doc.getElementById('gif').getAttribute('xlink:href'), + 'data:image/gif;base64,R0lGODdhAwABAKEDAAAA//8AAAD/AP///ywAAAAAAwABAAACAoxQADs=', + "Raster image (GIF) not correctly embedded.") + self.assertEqual(doc.getElementById('jpg').getAttribute('xlink:href'), + 'data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD//gATQ3JlYXRlZCB3aXRoIEdJTVD/' + '2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/' + '2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/' + 'wAARCAABAAMDAREAAhEBAxEB/8QAFAABAAAAAAAAAAAAAAAAAAAACv/EABoQAAEFAQAAAAAAAAAAAAAAAAgABQc3d7j/' + 'xAAVAQEBAAAAAAAAAAAAAAAAAAAHCv/EABwRAAEDBQAAAAAAAAAAAAAAAAgAB7gJODl2eP/aAAwDAQACEQMRAD8AMeaF' + '/u2aj5z1Fqp7oN4rxx2kn5cPuhV6LkzG7qOyYL2r/9k=', + "Raster image (JPG) not correctly embedded.")
+ + def test_raster_paths_local(self): + doc = scourXmlFile('unittests/raster-paths-local.svg') + images = doc.getElementsByTagName('image') + for image in images: + href = image.getAttribute('xlink:href') + self.assertTrue(href.startswith('data:image/'), + "Raster image from local path '" + href + "' not embedded.") + + def test_raster_paths_local_absolute(self): + with open('unittests/raster-formats.svg', 'r') as f: + svg = f.read() + + # create a reference string by scouring the original file with relative links + options = ScourOptions + options.infilename = 'unittests/raster-formats.svg' + reference_svg = scourString(svg, options) + + # this will not always create formally valid paths but it'll check how robust our implementation is + # (the third path is invalid for sure because file: needs three slashes according to URI spec) + svg = svg.replace('raster.png', + '/' + os.path.abspath(os.path.dirname(__file__)) + '\\unittests\\raster.png') + svg = svg.replace('raster.gif', + 'file:///' + os.path.abspath(os.path.dirname(__file__)) + '/unittests/raster.gif') + svg = svg.replace('raster.jpg', + 'file:/' + os.path.abspath(os.path.dirname(__file__)) + '/unittests/raster.jpg') + + svg = scourString(svg) + + self.assertEqual(svg, reference_svg, + "Raster images from absolute local paths not properly embedded.") + + @unittest.skipIf(_ping('raw.githubusercontent.com') != 0, "Remote server not reachable.") + def test_raster_paths_remote(self): + doc = scourXmlFile('unittests/raster-paths-remote.svg') + images = doc.getElementsByTagName('image') + for image in images: + href = image.getAttribute('xlink:href') + self.assertTrue(href.startswith('data:image/'), + "Raster image from remote path '" + href + "' not embedded.") + + +class ViewBox(unittest.TestCase): + + def test_viewbox_create(self): + doc = scourXmlFile('unittests/viewbox-create.svg', parse_args(['--enable-viewboxing'])) + viewBox = doc.documentElement.getAttribute('viewBox') + self.assertEqual(viewBox, '0 0 123.46 654.32', "viewBox not properly created with '--enable-viewboxing'.") + + def test_viewbox_remove_width_and_height(self): + doc = scourXmlFile('unittests/viewbox-remove.svg', parse_args(['--enable-viewboxing'])) + width = doc.documentElement.getAttribute('width') + height = doc.documentElement.getAttribute('height') + self.assertEqual(width, '', "width not removed with '--enable-viewboxing'.") + self.assertEqual(height, '', "height not removed with '--enable-viewboxing'.") + + +# TODO: write tests for --keep-editor-data + +if __name__ == '__main__': + testcss = __import__('test_css') + scour = __import__('__main__') + suite = unittest.TestSuite(list(map(unittest.defaultTestLoader.loadTestsFromModule, [testcss, scour]))) + unittest.main(defaultTest="suite") diff --git a/tox.ini b/tox.ini new file mode 100644 index 0000000..82420b6 --- /dev/null +++ b/tox.ini @@ -0,0 +1,31 @@ +[tox] +envlist = + pypy + py27 + py34 + py35 + py36 + py37 + py38 + py39 + py310 + flake8 + + + +[testenv] +deps = + six + coverage + +commands = + scour --version + coverage run --parallel-mode --source=scour test_scour.py + + +[testenv:flake8] +deps = + flake8 + +commands = + flake8 --max-line-length=119 diff --git a/unittests/adobe.svg b/unittests/adobe.svg new file mode 100644 index 0000000..7dd7e73 --- /dev/null +++ b/unittests/adobe.svg @@ -0,0 +1,45 @@ +<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg" + xmlns:x="http://ns.adobe.com/Extensibility/1.0/" + 
xmlns:i="http://ns.adobe.com/AdobeIllustrator/10.0/" + xmlns:graph="http://ns.adobe.com/Graphs/1.0/" + xmlns:a="http://ns.adobe.com/AdobeSVGViewerExtensions/3.0/" + xmlns:f="http://ns.adobe.com/Flows/1.0/" + xmlns:ir="http://ns.adobe.com/ImageReplacement/1.0/" + xmlns:custom="http://ns.adobe.com/GenericCustomNamespace/1.0/" + xmlns:xpath="http://ns.adobe.com/XPath/1.0/" + xmlns:ok="A.namespace.we.want.left.in" + i:viewOrigin="190.2959 599.1841" i:rulerOrigin="0 0" i:pageBounds="0 792 612 0"> +<x:foo>bar</x:foo> +<i:foo>bar</i:foo> +<graph:foo>bar</graph:foo> +<a:foo>bar</a:foo> +<f:foo>bar</f:foo> +<ir:foo>bar</ir:foo> +<custom:foo>bar</custom:foo> +<xpath:foo>bar</xpath:foo> +<variableSets xmlns="http://ns.adobe.com/Variables/1.0/"> + <variableSet varSetName="binding1" locked="none"> + <variables/> + <v:sampleDataSets xmlns="http://ns.adobe.com/GenericCustomNamespace/1.0/" xmlns:v="http://ns.adobe.com/Variables/1.0/"/> + </variableSet> +</variableSets> +<sfw xmlns="http://ns.adobe.com/SaveForWeb/1.0/"> + <slices/> + <sliceSourceBounds y="191.664" x="190.296" width="225.72" height="407.52" bottomLeftOrigin="true"/> +</sfw> +<rect width="300" height="200" fill="green" + x:baz="1" + i:baz="1" + graph:baz="1" + a:baz="1" + f:baz="1" + ir:baz="1" + custom:baz='1' + xpath:baz="1" + xmlns:v="http://ns.adobe.Variables/1.0/" + v:baz="1" + xmlns:sfw="http://ns.adobe.com/SaveForWeb/1.0/" + sfw:baz="1" + ok:baz="1" /> +</svg> diff --git a/unittests/cascading-default-attribute-removal.svg b/unittests/cascading-default-attribute-removal.svg new file mode 100644 index 0000000..dbc3698 --- /dev/null +++ b/unittests/cascading-default-attribute-removal.svg @@ -0,0 +1,23 @@ +<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"> + <path style="fill-rule:evenodd;stroke-linecap:butt;stroke-width:1.00;stroke:#000" d="m1,1z"/> + <path style="fill-rule:nonzero;stroke-linecap:butt;stroke:#000" d="m1,1z"/> + <g style="stroke:#f00;marker:none"> + <path style="marker-start:none;fill-rule:evenodd;stroke-linecap:butt" d="m1,1z"/> + <path style="fill-rule:nonzero" d="m1,1z"/> + <g style="fill:#f0f;text-anchor:stop;fill-rule:evenodd;stroke-linecap:round;marker:url(#nirvana)"> + <path style="marker-start:none;fill-rule:evenodd;stroke-linecap:butt" d="m1,1z"/> + <path style="color:#000;fill-rule:nonzero;" d="m1,1z"/> + <path d="m1,1z"/> + </g> + <g style="fill:#f0f;text-anchor:stop;fill-rule:evenodd;stroke-linecap:round;marker:url(#nirvana)"> + <path style="marker-start:none;fill-rule:evenodd;stroke-linecap:butt" d="m1,1z"/> + <path style="color:#000;fill-rule:nonzero;" d="m1,1z"/> + </g> + <g style="text-anchor:stop;fill-rule:nonzero;marker:none;stroke-linecap:butt"> + <path style="marker-start:none;fill-rule:evenodd;stroke-linecap:butt" d="m1,1z"/> + <path style="fill-rule:nonzero;" d="m1,1z"/> + <path d="m1,1z"/> + </g> + </g> +</svg> diff --git a/unittests/cdata.svg b/unittests/cdata.svg new file mode 100644 index 0000000..8ecb680 --- /dev/null +++ b/unittests/cdata.svg @@ -0,0 +1,6 @@ +<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg"> + <script type="application/ecmascript"><![CDATA[ + alert('pb&j'); + ]]></script> +</svg> diff --git a/unittests/collapse-gradients-gradientUnits.svg b/unittests/collapse-gradients-gradientUnits.svg new file mode 100644 index 0000000..76f6169 --- /dev/null +++ b/unittests/collapse-gradients-gradientUnits.svg @@ -0,0 +1,11 @@ +<?xml version="1.0" 
encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"> +<defs> + <linearGradient id="g1" x1="0" y1="0" x2="1" y2="0" gradientUnits="userSpaceOnUse"> + <stop offset="0" stop-color="blue" /> + <stop offset="1" stop-color="yellow" /> + </linearGradient> + <radialGradient id="g2" xlink:href="#g1" cx="50%" cy="50%" r="30%" gradientUnits="objectBoundingBox"/> +</defs> +<rect fill="url(#g2)" width="200" height="200"/> +</svg> diff --git a/unittests/collapse-gradients-preserve-xlink-href.svg b/unittests/collapse-gradients-preserve-xlink-href.svg new file mode 100644 index 0000000..f736922 --- /dev/null +++ b/unittests/collapse-gradients-preserve-xlink-href.svg @@ -0,0 +1,13 @@ +<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"> +<defs> + <linearGradient id="g1" x1="0" y1="0" x2="1" y2="0" gradientUnits="userSpaceOnUse"> + <stop offset="0" stop-color="blue" /> + <stop offset="1" stop-color="yellow" /> + </linearGradient> + <radialGradient id="g2" xlink:href="#g1" cx="100" cy="100" r="70"/> + <radialGradient id="g3" xlink:href="#g2" cx="100" cy="100" r="70"/> +</defs> +<rect fill="url(#g1)" width="200" height="200"/> +<rect fill="url(#g3)" width="200" height="200" y="200"/> +</svg> diff --git a/unittests/collapse-gradients.svg b/unittests/collapse-gradients.svg new file mode 100644 index 0000000..a45f962 --- /dev/null +++ b/unittests/collapse-gradients.svg @@ -0,0 +1,11 @@ +<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"> +<defs> + <linearGradient id="grad1" x1="0" y1="0" x2="1" y2="0" gradientUnits="userSpaceOnUse" spreadMethod="reflect" gradientTransform="matrix(1,2,3,4,5,6)"> + <stop offset="0" stop-color="blue" /> + <stop offset="1" stop-color="yellow" /> + </linearGradient> + <radialGradient id="grad2" xlink:href="#grad1" cx="100" cy="100" r="70"/> +</defs> +<rect fill="url(#grad2)" width="200" height="200"/> +</svg> diff --git a/unittests/collapse-same-path-points.svg b/unittests/collapse-same-path-points.svg new file mode 100644 index 0000000..b05f4d1 --- /dev/null +++ b/unittests/collapse-same-path-points.svg @@ -0,0 +1,4 @@ +<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="210" height="210"> + <path stroke="yellow" fill="red" d="M100,100 L200.12345,200.12345 C215,205 185,195 200.12345,200.12345 Z"/> +</svg> diff --git a/unittests/collapse-straight-path-segments.svg b/unittests/collapse-straight-path-segments.svg new file mode 100644 index 0000000..fa8e030 --- /dev/null +++ b/unittests/collapse-straight-path-segments.svg @@ -0,0 +1,33 @@ +<?xml version="1.0" encoding="UTF-8"?> +<svg width="100" height="100" xmlns="http://www.w3.org/2000/svg"> + <defs> + <marker id="dot"> + <circle r="5px"/> + </marker> + </defs> + + <!-- h/v commands should be collapsed into a single h/v commands --> + <path d="m0 0h10 20"/> + <path d="m0 0v10 20"/> + <path d="m0 0h10 0.5v10 0.5"/> + <!-- h/v commands should not be collapsed if they have different direction --> + <path d="m0 0h10 -1v10 -1"/> + <!-- h/v commands should also be collapsed if only start/end markers are present --> + <path d="m0 0h10 20" marker-start="url(#dot)" marker-end="url(#dot)"/> + <path d="m0 0h10 20" style="marker-start:url(#dot);marker-end:url(#dot)"/> + <!-- h/v commands should be preserved if 
intermediate markers are present --> + <path d="m0 0h10 20" marker="url(#dot)"/> + <path d="m0 0h10 20" marker-mid="url(#dot)"/> + <path d="m0 0h10 20" style="marker:url(#dot)"/> + <path d="m0 0h10 20" style="marker-mid:url(#dot)"/> + + <!-- all consecutive lineto commands pointing into the sam direction + should be collapsed into a single (implicit if possible) lineto command --> + <path d="m 0 0 l 10 20 0.25 0.5 l 0.75 1.5 l 5 10 0.2 0.4 l 3 6 0.8 1.6 l 0 1 l 1 2 9 18"/> + <!-- must not be collapsed (same slope, but different direction) --> + <path d="m 0 0 10 10 -20 -20 l 10 10 -20 -20"/> + <!-- first parameter pair of a moveto subpath must not be collapsed as it's not drawn on canvas --> + <path d="m0 0 1 2 m 1 2 1 2l 1 2 m 1 2 1 2 1 2"/> + <!-- real world example of straight path with multiple nodes --> + <path d="m 6.3227953,7.1547422 10.6709787,5.9477588 9.20334,5.129731 22.977448,12.807101 30.447251,16.970601 7.898986,4.402712"/> +</svg> diff --git a/unittests/color-formats.svg b/unittests/color-formats.svg new file mode 100644 index 0000000..0272c7e --- /dev/null +++ b/unittests/color-formats.svg @@ -0,0 +1,12 @@ +<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg" version="1.1"> +<defs> + <linearGradient id="g1" x1="0" y1="0" x2="1" y2="0"> + <stop offset="0.5" stop-color="rgb(50.0%, 0%, .0%)" /> + </linearGradient> + <solidColor id="c1" solid-color="lightgoldenrodyellow"/> +</defs> + <rect id="rect" width="100" height="100" fill="rgb(15,16,17)" stroke="darkgrey" /> + <circle id="circle" cx="100" cy="100" r="30" fill="url(#g1)" stroke="url(#c1)" /> + <ellipse id="ellipse" cx="100" cy="100" rx="30" ry="30" style="fill:#ffffff" fill="black" /> +</svg> diff --git a/unittests/comment-beside-xml-decl.svg b/unittests/comment-beside-xml-decl.svg new file mode 100644 index 0000000..cd3ecff --- /dev/null +++ b/unittests/comment-beside-xml-decl.svg @@ -0,0 +1,10 @@ +<?xml version="1.0" encoding="utf-8" standalone="yes"?> +<!-- Oh look a comment --> +<!-- generated by foobar version 20120503 --> +<!-- And another --> +<svg xmlns="http://www.w3.org/2000/svg"> + <!-- This comment is meant to test whether removing a comment before <svg> + messes up removing comments thereafter --> + <!-- And this one is meant to test whether iteration works correctly in + <svg> as well as the document element --> +</svg> diff --git a/unittests/comments.svg b/unittests/comments.svg new file mode 100644 index 0000000..06a75f2 --- /dev/null +++ b/unittests/comments.svg @@ -0,0 +1,6 @@ +<?xml version="1.0" ?> +<!-- Empty --> +<!-- Comment #2 --> +<svg xmlns="http://www.w3.org/2000/svg"> +</svg> +<!-- After --> diff --git a/unittests/commonized-referenced-elements.svg b/unittests/commonized-referenced-elements.svg new file mode 100644 index 0000000..3a152fb --- /dev/null +++ b/unittests/commonized-referenced-elements.svg @@ -0,0 +1,9 @@ +<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"> + <g id="g"> + <rect width="200" height="100" fill="#0f0"/> + <rect width="200" height="100" fill="#0f0"/> + <rect width="200" height="100" fill="#0f0"/> + <circle id="e" r="20" fill="#0f0"/> + </g> + <use xlink:href="#e" /> +</svg> diff --git a/unittests/css-reference.svg b/unittests/css-reference.svg new file mode 100644 index 0000000..6330c60 --- /dev/null +++ b/unittests/css-reference.svg @@ -0,0 +1,27 @@ +<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"> + 
+<defs> + <linearGradient id="g1"> + <stop offset="0" stop-color="red"/> + <stop offset="1" stop-color="blue"/> + </linearGradient> + <linearGradient id="g2"> + <stop offset="0" stop-color="green"/> + <stop offset="1" stop-color="yellow"/> + </linearGradient> +</defs> +<style type="text/css"><![CDATA[ + rect { + stroke: red; + stroke-width: 10; + fill:url(#g1) + } +]]></style> + +<style type="text/css">.circ { fill: none; stroke: url("#g2"); stroke-width: 15 }</style> + +<rect height="300" width="300"/> +<circle class="circ" cx="350" cy="350" r="40"/> + +</svg> diff --git a/unittests/descriptive-elements-with-text.svg b/unittests/descriptive-elements-with-text.svg new file mode 100644 index 0000000..c991ddd --- /dev/null +++ b/unittests/descriptive-elements-with-text.svg @@ -0,0 +1,6 @@ +<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<svg xmlns="http://www.w3.org/2000/svg"> + <title>This is a title element with only text node children + This is a desc element with only text node children + This is a metadata element with only text node children + diff --git a/unittests/doctype.svg b/unittests/doctype.svg new file mode 100644 index 0000000..d19e074 --- /dev/null +++ b/unittests/doctype.svg @@ -0,0 +1,7 @@ + + + + +]> + diff --git a/unittests/dont-collapse-gradients.svg b/unittests/dont-collapse-gradients.svg new file mode 100644 index 0000000..00b58f5 --- /dev/null +++ b/unittests/dont-collapse-gradients.svg @@ -0,0 +1,13 @@ + + + + + + + + + + + + + diff --git a/unittests/dont-convert-short-color-names.svg b/unittests/dont-convert-short-color-names.svg new file mode 100644 index 0000000..cbcece7 --- /dev/null +++ b/unittests/dont-convert-short-color-names.svg @@ -0,0 +1,4 @@ + + + + diff --git a/unittests/duplicate-gradient-stops-pct.svg b/unittests/duplicate-gradient-stops-pct.svg new file mode 100644 index 0000000..43c99c4 --- /dev/null +++ b/unittests/duplicate-gradient-stops-pct.svg @@ -0,0 +1,12 @@ + + + + + + + + + + + + diff --git a/unittests/duplicate-gradient-stops.svg b/unittests/duplicate-gradient-stops.svg new file mode 100644 index 0000000..4629bd6 --- /dev/null +++ b/unittests/duplicate-gradient-stops.svg @@ -0,0 +1,19 @@ + + + + + + + + + + + + + + + + + + + diff --git a/unittests/duplicate-gradients-update-style.svg b/unittests/duplicate-gradients-update-style.svg new file mode 100644 index 0000000..b18d7b9 --- /dev/null +++ b/unittests/duplicate-gradients-update-style.svg @@ -0,0 +1,16 @@ + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/unittests/empty-descriptive-elements.svg b/unittests/empty-descriptive-elements.svg new file mode 100644 index 0000000..2790084 --- /dev/null +++ b/unittests/empty-descriptive-elements.svg @@ -0,0 +1,5 @@ + + + + + diff --git a/unittests/empty-g.svg b/unittests/empty-g.svg new file mode 100644 index 0000000..ccb7355 --- /dev/null +++ b/unittests/empty-g.svg @@ -0,0 +1,7 @@ + + + + + + + diff --git a/unittests/empty-style.svg b/unittests/empty-style.svg new file mode 100644 index 0000000..a2d2afd --- /dev/null +++ b/unittests/empty-style.svg @@ -0,0 +1,4 @@ + + + + diff --git a/unittests/encoding-iso-8859-15.svg b/unittests/encoding-iso-8859-15.svg new file mode 100644 index 0000000..626aca4 --- /dev/null +++ b/unittests/encoding-iso-8859-15.svg @@ -0,0 +1,4 @@ + + + ߤ + diff --git a/unittests/encoding-utf8.svg b/unittests/encoding-utf8.svg new file mode 100644 index 0000000..dd63f12 --- /dev/null +++ b/unittests/encoding-utf8.svg @@ -0,0 +1,19 @@ + + + Hello in many languages: +ar: أهلا +bn: হ্যালো +el: 
Χαίρετε
+en: Hello
+hi: नमस्ते
+iw: שלום
+ja: こんにちは
+km: ជំរាបសួរ
+ml: ഹലോ
+ru: Здравствуйте
+ur: ہیلو
+zh: 您好
+ “”‘’–—…‐‒°©®™•½¼¾⅓⅔†‡µ¢£€«»♠♣♥♦¿�
+ :-×÷±∞π∅≤≥≠≈∧∨∩∪∈∀∃∄∑∏←↑→↓↔↕↖↗↘↙↺↻⇒⇔
+ ⁰¹²³⁴⁵⁶⁷⁸⁹⁺⁻⁽⁾ⁿⁱ₀₁₂₃₄₅₆₇₈₉₊₋₌₍₎
diff --git a/unittests/entities.svg b/unittests/entities.svg
new file mode 100644
index 0000000..2308b46
--- /dev/null
+++ b/unittests/entities.svg
@@ -0,0 +1,8 @@
diff --git a/unittests/fill-none.svg b/unittests/fill-none.svg
new file mode 100644
index 0000000..6442c90
--- /dev/null
+++ b/unittests/fill-none.svg
@@ -0,0 +1,5 @@
diff --git a/unittests/flowtext-less.svg b/unittests/flowtext-less.svg
new file mode 100644
index 0000000..eea559c
--- /dev/null
+++ b/unittests/flowtext-less.svg
@@ -0,0 +1,66 @@
+ image/svg+xml
+ abcd
diff --git a/unittests/flowtext.svg b/unittests/flowtext.svg
new file mode 100644
index 0000000..9409b4f
--- /dev/null
+++ b/unittests/flowtext.svg
@@ -0,0 +1,78 @@
+ image/svg+xml
+ sfdadasdasdasdadsa abcd
diff --git a/unittests/font-styles.svg b/unittests/font-styles.svg
new file mode 100644
index 0000000..e4120df
--- /dev/null
+++ b/unittests/font-styles.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/full-descriptive-elements.svg b/unittests/full-descriptive-elements.svg
new file mode 100644
index 0000000..8decf2d
--- /dev/null
+++ b/unittests/full-descriptive-elements.svg
@@ -0,0 +1,31 @@
+ This is an example SVG file
+ Unit test for Scour's --remove-titles option
+ This is an example SVG file
+ Unit test for Scour's
+ --remove-descriptions option
+ No One
diff --git a/unittests/gradient-default-attrs.svg b/unittests/gradient-default-attrs.svg
new file mode 100644
index 0000000..25cdb82
--- /dev/null
+++ b/unittests/gradient-default-attrs.svg
@@ -0,0 +1,21 @@
diff --git a/unittests/group-creation.svg b/unittests/group-creation.svg
new file mode 100644
index 0000000..96776c0
--- /dev/null
+++ b/unittests/group-creation.svg
@@ -0,0 +1,6 @@
diff --git a/unittests/group-no-creation-tspan.svg b/unittests/group-no-creation-tspan.svg
new file mode 100644
index 0000000..65f3803
--- /dev/null
+++ b/unittests/group-no-creation-tspan.svg
@@ -0,0 +1,8 @@
+ text1
+ text2
+ text3
diff --git a/unittests/group-no-creation.svg b/unittests/group-no-creation.svg
new file mode 100644
index 0000000..bea6419
--- /dev/null
+++ b/unittests/group-no-creation.svg
@@ -0,0 +1,6 @@
diff --git a/unittests/group-sibling-merge-crash.svg b/unittests/group-sibling-merge-crash.svg
new file mode 100644
index 0000000..3e50347
--- /dev/null
+++ b/unittests/group-sibling-merge-crash.svg
@@ -0,0 +1,13 @@
diff --git a/unittests/group-sibling-merge.svg b/unittests/group-sibling-merge.svg
new file mode 100644
index 0000000..c7f0d02
--- /dev/null
+++ b/unittests/group-sibling-merge.svg
@@ -0,0 +1,29 @@
+ Produced by GNUPLOT 5.2 patchlevel 8
+ 0
+ 5000
+ 10000
+ 15000
diff --git a/unittests/groups-in-switch-with-id.svg b/unittests/groups-in-switch-with-id.svg
new file mode 100644
index 0000000..317cfcc
--- /dev/null
+++ b/unittests/groups-in-switch-with-id.svg
@@ -0,0 +1,18 @@
diff --git a/unittests/groups-in-switch.svg b/unittests/groups-in-switch.svg
new file mode 100644
index 0000000..96394fd
--- /dev/null
+++ b/unittests/groups-in-switch.svg
@@ -0,0 +1,18 @@
diff --git a/unittests/groups-with-title-desc.svg b/unittests/groups-with-title-desc.svg
new file mode 100644
index 0000000..7983dc0
--- /dev/null
+++ b/unittests/groups-with-title-desc.svg
@@ -0,0 +1,13 @@
+ Group 1
+ Group 1
diff --git a/unittests/ids-protect.svg b/unittests/ids-protect.svg
new file mode 100644
index 0000000..9809209
--- /dev/null
+++ b/unittests/ids-protect.svg
@@ -0,0 +1,8 @@
+ Text 1
+ Text 2
+ Text 3
+ Text custom
+ My text
diff --git a/unittests/ids-to-strip.svg b/unittests/ids-to-strip.svg
new file mode 100644
index 0000000..1ac59bc
--- /dev/null
+++ b/unittests/ids-to-strip.svg
@@ -0,0 +1,11 @@
+ Fooey
diff --git a/unittests/ids.svg b/unittests/ids.svg
new file mode 100644
index 0000000..b787343
--- /dev/null
+++ b/unittests/ids.svg
@@ -0,0 +1,12 @@
diff --git a/unittests/important-groups-in-defs.svg b/unittests/important-groups-in-defs.svg
new file mode 100644
index 0000000..18ba1df
--- /dev/null
+++ b/unittests/important-groups-in-defs.svg
@@ -0,0 +1,12 @@
diff --git a/unittests/inkscape.svg b/unittests/inkscape.svg
new file mode 100644
index 0000000..a51ad49
--- /dev/null
+++ b/unittests/inkscape.svg
@@ -0,0 +1,7 @@
diff --git a/unittests/minimal.svg b/unittests/minimal.svg
new file mode 100644
index 0000000..b9d264c
--- /dev/null
+++ b/unittests/minimal.svg
@@ -0,0 +1,2 @@
diff --git a/unittests/move-common-attributes-to-grandparent.svg b/unittests/move-common-attributes-to-grandparent.svg
new file mode 100644
index 0000000..4e202bd
--- /dev/null
+++ b/unittests/move-common-attributes-to-grandparent.svg
@@ -0,0 +1,10 @@
diff --git a/unittests/move-common-attributes-to-parent.svg b/unittests/move-common-attributes-to-parent.svg
new file mode 100644
index 0000000..f390c89
--- /dev/null
+++ b/unittests/move-common-attributes-to-parent.svg
@@ -0,0 +1,13 @@
+Hello
+World!
+Goodbye
+Cruel World!
diff --git a/unittests/nested-defs.svg b/unittests/nested-defs.svg
new file mode 100644
index 0000000..7091985
--- /dev/null
+++ b/unittests/nested-defs.svg
@@ -0,0 +1,14 @@
diff --git a/unittests/nested-useless-groups.svg b/unittests/nested-useless-groups.svg
new file mode 100644
index 0000000..73b5f88
--- /dev/null
+++ b/unittests/nested-useless-groups.svg
@@ -0,0 +1,9 @@
diff --git a/unittests/newlines.svg b/unittests/newlines.svg
new file mode 100644
index 0000000..a909603
--- /dev/null
+++ b/unittests/newlines.svg
@@ -0,0 +1,50 @@
\ No newline at end of file
diff --git a/unittests/no-collapse-lines.svg b/unittests/no-collapse-lines.svg
new file mode 100644
index 0000000..85da385
--- /dev/null
+++ b/unittests/no-collapse-lines.svg
@@ -0,0 +1,8 @@
diff --git a/unittests/orient-marker.svg b/unittests/orient-marker.svg
new file mode 100644
index 0000000..19ecd19
--- /dev/null
+++ b/unittests/orient-marker.svg
@@ -0,0 +1,12 @@
diff --git a/unittests/overflow-marker.svg b/unittests/overflow-marker.svg
new file mode 100644
index 0000000..ec068d9
--- /dev/null
+++ b/unittests/overflow-marker.svg
@@ -0,0 +1,12 @@
diff --git a/unittests/overflow-svg.svg b/unittests/overflow-svg.svg
new file mode 100644
index 0000000..8830a80
--- /dev/null
+++ b/unittests/overflow-svg.svg
@@ -0,0 +1,9 @@
diff --git a/unittests/path-abs-to-rel.svg b/unittests/path-abs-to-rel.svg
new file mode 100644
index 0000000..c9cc803
--- /dev/null
+++ b/unittests/path-abs-to-rel.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/path-bez-optimize.svg b/unittests/path-bez-optimize.svg
new file mode 100644
index 0000000..30761f3
--- /dev/null
+++ b/unittests/path-bez-optimize.svg
@@ -0,0 +1,6 @@
diff --git a/unittests/path-command-rewrites.svg b/unittests/path-command-rewrites.svg
new file mode 100644
index 0000000..47ddc61
--- /dev/null
+++ b/unittests/path-command-rewrites.svg
@@ -0,0 +1,8 @@
diff --git a/unittests/path-elliptical-flags.svg b/unittests/path-elliptical-flags.svg
new file mode 100644
index 0000000..cdf13ba
--- /dev/null
+++ b/unittests/path-elliptical-flags.svg
@@ -0,0 +1,7 @@
diff --git a/unittests/path-implicit-line.svg b/unittests/path-implicit-line.svg
new file mode 100644
index 0000000..a42848e
--- /dev/null
+++ b/unittests/path-implicit-line.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/path-line-optimize.svg b/unittests/path-line-optimize.svg
new file mode 100644
index 0000000..13cc139
--- /dev/null
+++ b/unittests/path-line-optimize.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/path-no-optimize.svg b/unittests/path-no-optimize.svg
new file mode 100644
index 0000000..bda0fff
--- /dev/null
+++ b/unittests/path-no-optimize.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/path-precision-control-points.svg b/unittests/path-precision-control-points.svg
new file mode 100644
index 0000000..add0f58
--- /dev/null
+++ b/unittests/path-precision-control-points.svg
@@ -0,0 +1,13 @@
diff --git a/unittests/path-precision.svg b/unittests/path-precision.svg
new file mode 100644
index 0000000..9222ed3
--- /dev/null
+++ b/unittests/path-precision.svg
@@ -0,0 +1,11 @@
diff --git a/unittests/path-quad-optimize.svg b/unittests/path-quad-optimize.svg
new file mode 100644
index 0000000..bbe3bc9
--- /dev/null
+++ b/unittests/path-quad-optimize.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/path-simple-triangle.svg b/unittests/path-simple-triangle.svg
new file mode 100644
index 0000000..94ab17e
--- /dev/null
+++ b/unittests/path-simple-triangle.svg
@@ -0,0 +1,8 @@
diff --git a/unittests/path-sn.svg b/unittests/path-sn.svg
new file mode 100644
index 0000000..0b9f7d2
--- /dev/null
+++ b/unittests/path-sn.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/path-truncate-zeros-calc.svg b/unittests/path-truncate-zeros-calc.svg
new file mode 100644
index 0000000..c889fff
--- /dev/null
+++ b/unittests/path-truncate-zeros-calc.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/path-truncate-zeros.svg b/unittests/path-truncate-zeros.svg
new file mode 100644
index 0000000..ad1c6d5
--- /dev/null
+++ b/unittests/path-truncate-zeros.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/path-use-scientific-notation.svg b/unittests/path-use-scientific-notation.svg
new file mode 100644
index 0000000..afbbf05
--- /dev/null
+++ b/unittests/path-use-scientific-notation.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/path-with-caps.svg b/unittests/path-with-caps.svg
new file mode 100644
index 0000000..3c24163
--- /dev/null
+++ b/unittests/path-with-caps.svg
@@ -0,0 +1,10 @@
diff --git a/unittests/path-with-closepath.svg b/unittests/path-with-closepath.svg
new file mode 100644
index 0000000..80858ca
--- /dev/null
+++ b/unittests/path-with-closepath.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/polygon-coord-neg-first.svg b/unittests/polygon-coord-neg-first.svg
new file mode 100644
index 0000000..9f87a3e
--- /dev/null
+++ b/unittests/polygon-coord-neg-first.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/polygon-coord-neg.svg b/unittests/polygon-coord-neg.svg
new file mode 100644
index 0000000..73fe0b9
--- /dev/null
+++ b/unittests/polygon-coord-neg.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/polygon-coord.svg b/unittests/polygon-coord.svg
new file mode 100644
index 0000000..15940d4
--- /dev/null
+++ b/unittests/polygon-coord.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/polygon.svg b/unittests/polygon.svg
new file mode 100644
index 0000000..d927a00
--- /dev/null
+++ b/unittests/polygon.svg
@@ -0,0 +1,5 @@
diff --git a/unittests/polyline-coord-neg-first.svg b/unittests/polyline-coord-neg-first.svg
new file mode 100644
index 0000000..41d1981
--- /dev/null
+++ b/unittests/polyline-coord-neg-first.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/polyline-coord-neg.svg b/unittests/polyline-coord-neg.svg
new file mode 100644
index 0000000..da82dad
--- /dev/null
+++ b/unittests/polyline-coord-neg.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/polyline-coord.svg b/unittests/polyline-coord.svg
new file mode 100644
index 0000000..fc209ed
--- /dev/null
+++ b/unittests/polyline-coord.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/protection.svg b/unittests/protection.svg
new file mode 100644
index 0000000..f2930f5
--- /dev/null
+++ b/unittests/protection.svg
@@ -0,0 +1,11 @@
\ No newline at end of file
diff --git a/unittests/quot-in-url.svg b/unittests/quot-in-url.svg
new file mode 100644
index 0000000..6d82567
--- /dev/null
+++ b/unittests/quot-in-url.svg
@@ -0,0 +1,10 @@
diff --git a/unittests/quotes-in-styles.svg b/unittests/quotes-in-styles.svg
new file mode 100644
index 0000000..38a30f2
--- /dev/null
+++ b/unittests/quotes-in-styles.svg
@@ -0,0 +1,5 @@
diff --git a/unittests/raster-formats.svg b/unittests/raster-formats.svg
new file mode 100644
index 0000000..c31b65a
--- /dev/null
+++ b/unittests/raster-formats.svg
@@ -0,0 +1,7 @@
+ Three different formats
\ No newline at end of file
diff --git a/unittests/raster-paths-local.svg b/unittests/raster-paths-local.svg
new file mode 100644
index 0000000..61db8ab
--- /dev/null
+++ b/unittests/raster-paths-local.svg
@@ -0,0 +1,19 @@
+ Local files
+ Local files (file: protocol)
\ No newline at end of file
diff --git a/unittests/raster-paths-remote.svg b/unittests/raster-paths-remote.svg
new file mode 100644
index 0000000..ede7783
--- /dev/null
+++ b/unittests/raster-paths-remote.svg
@@ -0,0 +1,8 @@
+ Files from internet
\ No newline at end of file
diff --git a/unittests/raster.gif b/unittests/raster.gif
new file mode 100644
index 0000000..6ad1d03
Binary files /dev/null and b/unittests/raster.gif differ
diff --git a/unittests/raster.jpg b/unittests/raster.jpg
new file mode 100644
index 0000000..f2a3c4b
Binary files /dev/null and b/unittests/raster.jpg differ
diff --git a/unittests/raster.png b/unittests/raster.png
new file mode 100644
index 0000000..81b33f6
Binary files /dev/null and b/unittests/raster.png differ
diff --git a/unittests/redundant-svg-namespace.svg b/unittests/redundant-svg-namespace.svg
new file mode 100644
index 0000000..1d1dd8d
--- /dev/null
+++ b/unittests/redundant-svg-namespace.svg
@@ -0,0 +1,9 @@
+ Test
+ Hallo World
diff --git a/unittests/referenced-elements-1.svg b/unittests/referenced-elements-1.svg
new file mode 100644
index 0000000..e779080
--- /dev/null
+++ b/unittests/referenced-elements-1.svg
@@ -0,0 +1,11 @@
+ Fooey
diff --git a/unittests/referenced-font.svg b/unittests/referenced-font.svg
new file mode 100644
index 0000000..7d992ec
--- /dev/null
+++ b/unittests/referenced-font.svg
@@ -0,0 +1,17 @@
+ Text
diff --git a/unittests/refs-in-defs.svg b/unittests/refs-in-defs.svg
new file mode 100644
index 0000000..8636c5a
--- /dev/null
+++ b/unittests/refs-in-defs.svg
@@ -0,0 +1,8 @@
diff --git a/unittests/remove-default-attr-order.svg b/unittests/remove-default-attr-order.svg
new file mode 100644
index 0000000..506c9ce
--- /dev/null
+++ b/unittests/remove-default-attr-order.svg
@@ -0,0 +1,11 @@
diff --git a/unittests/remove-default-attr-std-deviation.svg b/unittests/remove-default-attr-std-deviation.svg
new file mode 100644
index 0000000..ba88368
--- /dev/null
+++ b/unittests/remove-default-attr-std-deviation.svg
@@ -0,0 +1,11 @@
diff --git a/unittests/remove-duplicate-gradients-master-without-id.svg b/unittests/remove-duplicate-gradients-master-without-id.svg
new file mode 100644
index 0000000..66727e9
--- /dev/null
+++ b/unittests/remove-duplicate-gradients-master-without-id.svg
@@ -0,0 +1,17 @@
diff --git a/unittests/remove-duplicate-gradients.svg b/unittests/remove-duplicate-gradients.svg
new file mode 100644
index 0000000..d84c089
--- /dev/null
+++ b/unittests/remove-duplicate-gradients.svg
@@ -0,0 +1,24 @@
diff --git a/unittests/remove-unused-attributes-on-parent.svg b/unittests/remove-unused-attributes-on-parent.svg
new file mode 100644
index 0000000..7f68d15
--- /dev/null
+++ b/unittests/remove-unused-attributes-on-parent.svg
@@ -0,0 +1,8 @@
diff --git a/unittests/scour-lengths.svg b/unittests/scour-lengths.svg
new file mode 100644
index 0000000..f5c0d5c
--- /dev/null
+++ b/unittests/scour-lengths.svg
@@ -0,0 +1,5 @@
diff --git a/unittests/shorten-ids-stable-output.svg b/unittests/shorten-ids-stable-output.svg
new file mode 100644
index 0000000..6905ec1
--- /dev/null
+++ b/unittests/shorten-ids-stable-output.svg
@@ -0,0 +1,11 @@
diff --git a/unittests/shorten-ids.svg b/unittests/shorten-ids.svg
new file mode 100644
index 0000000..7852c57
--- /dev/null
+++ b/unittests/shorten-ids.svg
@@ -0,0 +1,10 @@
diff --git a/unittests/sodipodi.svg b/unittests/sodipodi.svg
new file mode 100644
index 0000000..935884a
--- /dev/null
+++ b/unittests/sodipodi.svg
@@ -0,0 +1,7 @@
diff --git a/unittests/straight-curve.svg b/unittests/straight-curve.svg
new file mode 100644
index 0000000..95cd862
--- /dev/null
+++ b/unittests/straight-curve.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/stroke-none.svg b/unittests/stroke-none.svg
new file mode 100644
index 0000000..84f6c66
--- /dev/null
+++ b/unittests/stroke-none.svg
@@ -0,0 +1,16 @@
diff --git a/unittests/stroke-nowidth.svg b/unittests/stroke-nowidth.svg
new file mode 100644
index 0000000..2ca5809
--- /dev/null
+++ b/unittests/stroke-nowidth.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/stroke-transparent.svg b/unittests/stroke-transparent.svg
new file mode 100644
index 0000000..4ff39a2
--- /dev/null
+++ b/unittests/stroke-transparent.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/style-cdata.svg b/unittests/style-cdata.svg
new file mode 100644
index 0000000..4740da9
--- /dev/null
+++ b/unittests/style-cdata.svg
@@ -0,0 +1,16 @@
diff --git a/unittests/style-to-attr.svg b/unittests/style-to-attr.svg
new file mode 100644
index 0000000..3bbe3a0
--- /dev/null
+++ b/unittests/style-to-attr.svg
@@ -0,0 +1,9 @@
diff --git a/unittests/style.svg b/unittests/style.svg
new file mode 100644
index 0000000..2148103
--- /dev/null
+++ b/unittests/style.svg
@@ -0,0 +1,7 @@
diff --git a/unittests/transform-matrix-is-identity.svg b/unittests/transform-matrix-is-identity.svg
new file mode 100644
index 0000000..9764b28
--- /dev/null
+++ b/unittests/transform-matrix-is-identity.svg
@@ -0,0 +1,3 @@
diff --git a/unittests/transform-matrix-is-rotate-135.svg b/unittests/transform-matrix-is-rotate-135.svg
new file mode 100644
index 0000000..a0583bc
--- /dev/null
+++ b/unittests/transform-matrix-is-rotate-135.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/transform-matrix-is-rotate-225.svg b/unittests/transform-matrix-is-rotate-225.svg
new file mode 100644
index 0000000..1aa21ef
--- /dev/null
+++ b/unittests/transform-matrix-is-rotate-225.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/transform-matrix-is-rotate-45.svg b/unittests/transform-matrix-is-rotate-45.svg
new file mode 100644
index 0000000..1749d98
--- /dev/null
+++ b/unittests/transform-matrix-is-rotate-45.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/transform-matrix-is-rotate-90.svg b/unittests/transform-matrix-is-rotate-90.svg
new file mode 100644
index 0000000..269d526
--- /dev/null
+++ b/unittests/transform-matrix-is-rotate-90.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/transform-matrix-is-rotate-neg-45.svg b/unittests/transform-matrix-is-rotate-neg-45.svg
new file mode 100644
index 0000000..37b46e8
--- /dev/null
+++ b/unittests/transform-matrix-is-rotate-neg-45.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/transform-matrix-is-rotate-neg-90.svg b/unittests/transform-matrix-is-rotate-neg-90.svg
new file mode 100644
index 0000000..8fbbd4f
--- /dev/null
+++ b/unittests/transform-matrix-is-rotate-neg-90.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/transform-matrix-is-scale-2-3.svg b/unittests/transform-matrix-is-scale-2-3.svg
new file mode 100644
index 0000000..7a04ce5
--- /dev/null
+++ b/unittests/transform-matrix-is-scale-2-3.svg
@@ -0,0 +1,3 @@
diff --git a/unittests/transform-matrix-is-scale-neg-1.svg b/unittests/transform-matrix-is-scale-neg-1.svg
new file mode 100644
index 0000000..d402058
--- /dev/null
+++ b/unittests/transform-matrix-is-scale-neg-1.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/transform-matrix-is-translate.svg b/unittests/transform-matrix-is-translate.svg
new file mode 100644
index 0000000..0dfcd9d
--- /dev/null
+++ b/unittests/transform-matrix-is-translate.svg
@@ -0,0 +1,3 @@
diff --git a/unittests/transform-rotate-fold-3args.svg b/unittests/transform-rotate-fold-3args.svg
new file mode 100644
index 0000000..0139610
--- /dev/null
+++ b/unittests/transform-rotate-fold-3args.svg
@@ -0,0 +1,5 @@
diff --git a/unittests/transform-rotate-is-identity.svg b/unittests/transform-rotate-is-identity.svg
new file mode 100644
index 0000000..198ba11
--- /dev/null
+++ b/unittests/transform-rotate-is-identity.svg
@@ -0,0 +1,3 @@
diff --git a/unittests/transform-rotate-trim-range-719.5.svg b/unittests/transform-rotate-trim-range-719.5.svg
new file mode 100644
index 0000000..f0bb947
--- /dev/null
+++ b/unittests/transform-rotate-trim-range-719.5.svg
@@ -0,0 +1,5 @@
diff --git a/unittests/transform-rotate-trim-range-neg-540.0.svg b/unittests/transform-rotate-trim-range-neg-540.0.svg
new file mode 100644
index 0000000..3e857f6
--- /dev/null
+++ b/unittests/transform-rotate-trim-range-neg-540.0.svg
@@ -0,0 +1,5 @@
diff --git a/unittests/transform-scale-is-identity.svg b/unittests/transform-scale-is-identity.svg
new file mode 100644
index 0000000..037d38a
--- /dev/null
+++ b/unittests/transform-scale-is-identity.svg
@@ -0,0 +1,5 @@
diff --git a/unittests/transform-skewX-is-identity.svg b/unittests/transform-skewX-is-identity.svg
new file mode 100644
index 0000000..b038c6e
--- /dev/null
+++ b/unittests/transform-skewX-is-identity.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/transform-skewY-is-identity.svg b/unittests/transform-skewY-is-identity.svg
new file mode 100644
index 0000000..27da015
--- /dev/null
+++ b/unittests/transform-skewY-is-identity.svg
@@ -0,0 +1,4 @@
diff --git a/unittests/transform-translate-is-identity.svg b/unittests/transform-translate-is-identity.svg
new file mode 100644
index 0000000..6c62d23
--- /dev/null
+++ b/unittests/transform-translate-is-identity.svg
@@ -0,0 +1,5 @@
diff --git a/unittests/unreferenced-defs.svg b/unittests/unreferenced-defs.svg
new file mode 100644
index 0000000..2fd8a26
--- /dev/null
+++ b/unittests/unreferenced-defs.svg
@@ -0,0 +1,19 @@
diff --git a/unittests/unreferenced-font.svg b/unittests/unreferenced-font.svg
new file mode 100644
index 0000000..560c83f
--- /dev/null
+++ b/unittests/unreferenced-font.svg
@@ -0,0 +1,17 @@
+ Text
diff --git a/unittests/unreferenced-linearGradient.svg b/unittests/unreferenced-linearGradient.svg
new file mode 100644
index 0000000..f588eac
--- /dev/null
+++ b/unittests/unreferenced-linearGradient.svg
@@ -0,0 +1,6 @@
diff --git a/unittests/unreferenced-pattern.svg b/unittests/unreferenced-pattern.svg
new file mode 100644
index 0000000..7bcff58
--- /dev/null
+++ b/unittests/unreferenced-pattern.svg
@@ -0,0 +1,6 @@
diff --git a/unittests/unreferenced-radialGradient.svg b/unittests/unreferenced-radialGradient.svg
new file mode 100644
index 0000000..bfa35c8
--- /dev/null
+++ b/unittests/unreferenced-radialGradient.svg
@@ -0,0 +1,6 @@
diff --git a/unittests/useless-defs.svg b/unittests/useless-defs.svg
new file mode 100644
index 0000000..f4663ff
--- /dev/null
+++ b/unittests/useless-defs.svg
@@ -0,0 +1,21 @@
diff --git a/unittests/viewbox-create.svg b/unittests/viewbox-create.svg
new file mode 100644
index 0000000..0d250db
--- /dev/null
+++ b/unittests/viewbox-create.svg
@@ -0,0 +1,3 @@
diff --git a/unittests/viewbox-remove.svg b/unittests/viewbox-remove.svg
new file mode 100644
index 0000000..8fa8307
--- /dev/null
+++ b/unittests/viewbox-remove.svg
@@ -0,0 +1,3 @@
diff --git a/unittests/whitespace-defs.svg b/unittests/whitespace-defs.svg
new file mode 100644
index 0000000..a32fcb4
--- /dev/null
+++ b/unittests/whitespace-defs.svg
@@ -0,0 +1,6 @@
diff --git a/unittests/whitespace.svg b/unittests/whitespace.svg
new file mode 100644
index 0000000..2bb48a6
--- /dev/null
+++ b/unittests/whitespace.svg
@@ -0,0 +1,40 @@
+ text1 text2
+ text1 text2
+ text1 text2
+ text1 text2
+ text1 text2
+ text1 text2
+ text1
+ text2
+ text1
+ text2
+ text1
+ text2
+ text1 text2
+ text1 text2
+ text1 text2
+ text1 text2
+ text1 text2
+ text1 text2
+ text1
+ text2
+ text1 tspan1 text2
+ text1 tspan1 tspan2 text2
+ text1
+text2
+ text1tspantext2
+ text1
+tspan
+text2
diff --git a/unittests/xml-namespace-attrs.svg b/unittests/xml-namespace-attrs.svg
new file mode 100644
index 0000000..81c5fb4
--- /dev/null
+++ b/unittests/xml-namespace-attrs.svg
@@ -0,0 +1,24 @@
diff --git a/unittests/xml-ns-decl.svg b/unittests/xml-ns-decl.svg
new file mode 100644
index 0000000..0f057a7
--- /dev/null
+++ b/unittests/xml-ns-decl.svg
@@ -0,0 +1,30 @@
+ image/svg+xml
+ Open Clip Art Logo
+ 10-01-2004
+ Andreas Nilsson
+ Jon Phillips, Tobias Jakobs
+ This is one version of the official Open Clip Art Library logo.
+ logo, open clip art library logo, logotype
diff --git a/unittests/xml-space.svg b/unittests/xml-space.svg
new file mode 100644
index 0000000..88a9f50
--- /dev/null
+++ b/unittests/xml-space.svg
@@ -0,0 +1,4 @@
+ Some random text.
\ No newline at end of file
diff --git a/unittests/xml-well-formed.svg b/unittests/xml-well-formed.svg
new file mode 100644
index 0000000..5c8d706
--- /dev/null
+++ b/unittests/xml-well-formed.svg
@@ -0,0 +1,11 @@
+ 2 < 5
+ Peanut Butter & Jelly
+ ΉTML & CSS