Initial vendor packages

Signed-off-by: Valentin Popov <valentin@popov.link>
2024-01-08 01:21:28 +04:00
parent 5ecd8cf2cb
commit 1b6a04ca55
7309 changed files with 2160054 additions and 0 deletions

vendor/image/.cargo-checksum.json vendored Normal file (1 line)

File diff suppressed because one or more lines are too long

vendor/image/CHANGES.md vendored Normal file (582 lines)

@@ -0,0 +1,582 @@
# Release Notes
## Known issues
- Many decoders will panic on malicious input. In most cases, this is caused by
not enforcing memory limits, though other panics have been seen from fuzzing.
- The color space information of pixels is not clearly communicated.
## Changes
### Unreleased
- More convenient-to-use buffers will be added in the future. In particular,
improving initialization, passing of output buffers, and adding a more
complete representation for layouts. The plan is for these to interact with
the rest of the library through a byte-based interface similar to
`ImageDecoder`.
See ongoing work on [`image-canvas`](https://github.com/image-rs/canvas) if
you want to participate.
### Version 0.24.7
New features:
- Added `{ImageBuffer, DynamicImage}::write_with_encoder` to simplify writing
images with custom settings.
- Expose ICC profiles stored in tiff and webp files.
- Added option to set the background color of animated webp images.
- New methods for sampling and interpolation of `GenericImageView`s.
Bug fixes:
- Fix panic on empty dxt.
- Fix several panics in webp decoder.
- Allow unknown chunks at the end of webp files.
### Version 0.24.6
- Add support for QOI.
- ImageDecoders now expose ICC profiles on supported formats.
- Add support for BMPs without a file header.
- Improved AVIF encoder.
- WebP decoding fixes.
### Version 0.24.5
Structural changes:
- Increased the minimum supported Rust version (MSRV) to 1.61.
- Increased the version requirement for the `tiff` crate to 0.8.0.
- Increased the version requirement for the `jpeg` crate to 0.3.0.
Bug fixes:
- The `as_rgb32f` function of `DynamicImage` is now correctly documented.
- Fixed a crash when decoding ICO images. Added a regression test.
- Fixed a panic when transforming webp images. Added a regression test.
- Added a check to prevent integer overflow when calculating file size for BMP
images. The missing check could panic in debug mode or else set an incorrect
file size in release mode.
- Upgraded the PNG image encoder to use the newer `PngEncoder::write_image`
instead of the deprecated `PngEncoder::encode` which did not account for byte
order and could result in images with incorrect colors.
- Fixed `InsufficientMemory` error when trying to decode a PNG image.
- Fix warnings and CI issues.
- Typos and links in the documentation have been corrected.
Performance:
- Added check for dynamic image dimensions before resizing. This improves
performance in cases where the image does not need to be resized or has
already been resized.
### Version 0.24.4
New Features:
- Encoding for `webp` is now available with the native library. This needs to
be activated explicitly with the `webp-encoder` feature.
- `exr` decoding has gained basic limit support.
Bug fixes:
- The `Iterator::size_hint` implementation of pixel iterators has been fixed to
return the current length indicated by its `ExactSizeIterator` hint.
- Typos and bad references in the documentation have been removed.
Performance:
- `ImageBuffer::get_pixel{,_mut}` is now marked inline.
- `resize` now short-circuits when image dimensions are unchanged.
### Version 0.24.3
New Features:
- `TiffDecoder` now supports setting resource limits.
Bug fixes:
- Fix compile issues on little endian systems.
- Various panics discovered by fuzzing.
### Version 0.24.2
Structural changes:
- CI now runs `cargo-deny`, checking dependent crates against an OSS license
list and RUSTSEC advisories.
New Features:
- The WebP decoder recognizes and decodes images with `VP8X` header.
- The DDS decoder recognizes and decodes images with `DX10` headers.
Bug fixes:
- Calling `DynamicImage`/`ImageBuffer`'s methods `write_to` and `save` will now
work properly even if the backing container is larger than the image layout
requires. Only the relevant slice of pixel data is passed to the encoder.
- Fixed an OOM panic caused by malformed images in the `gif` decoder.
### Version 0.24.1
Bug Fixes:
- ImageBuffer::get_pixel_checked would sometimes return the incorrect pixel.
- PNG encoding would sometimes not recognize unsupported color types.
### Version 0.24.0
Breaking changes
Structural changes:
- Minimum Rust version is now `1.56` and may change in minor versions until
further notice. It is now tracked in the library's `Cargo.toml`, instead, by
the standard `[package.rust-version]` field. Note: this applies _to the
library itself_. You may need different version resolutions for dependencies
when using a non-stable version of Rust.
- The `math::utils::{nq, utils}` modules have been removed. These are better
served through the `color_quant` crate and the standard library respectively.
- All codecs are now available through `image::codecs`, no longer top-level.
- `ExtendedColorType` and `DynamicImage` have been made `#[non_exhaustive]`,
providing more methods instead of exhaustive matching.
- Reading images through the generic `io::Reader`, as well as generic
convenience interfaces, now requires the underlying reader to be `BufRead +
Seek`. This allows more efficient support for more formats. Similarly, writing
now requires writers to be `Write + Seek`.
- The `Bgra*` variants of buffers, which were only half-supported, have been
removed. The owning buffer types `ImageBuffer` and `DynamicImage`
fundamentally already make a choice in supported pixel representations. This
allows for more consistent internal behavior. Callers are expected to convert
formats when using those buffers, which they are required to do in any case
already, and which is routinely performed by decoders.
Trait reworks:
- The `Pixel` trait is no longer implemented quite as liberally for structs
defined in the crate. Instead, it is now restricted to a set of known channel
types, which ensures accuracy in computations involving those channels.
- The `ImageDecoderExt` trait has been renamed to `ImageDecoderRect`, according
to its actual functionality.
- The `Pixel` trait and its `Subpixel` field no longer require (or provide) a
`'static` lifetime bound.
- The `Pixel` trait no longer requires specifying an associated, constant
`ColorType`. This was of little relevance to computation but made it much
harder to implement and extend correctly. Instead, the _private_
`PixelWithColorType` extension is added for interfaces that require a
properly known variant.
- Reworked how `SubImage` interacts with the `GenericImage` trait. It is now a
default implementation. Note that `SubImage` now has _inherent_ methods that
avoid double-indirection, the trait's method will no longer avoid this.
- The `Primitive` trait now requires implementations to provide a minimum and
maximum logical bound for the purpose of converting to other primitive
representations.
Additions
Image formats:
- Reading lossless WebP is now supported.
- The OpenEXR format is now supported.
- The `jpeg` decoder has been upgraded to Lossless JPEG.
- The `AvifEncoder` now correctly handles alpha-less images. Some additional
color formats are converted to RGBA as well.
- The `Bmp` codec now decodes more valid images. It can decode a raw image
without performing the palette mapping. It provides a method to access the
palette. The encoder provides the inverse capabilities.
- `Tiff` is now an output format.
Buffers and Operations:
- The channel / primitive type `f32` is now supported. Currently only the
OpenEXR codec makes full use of it but this is expected to change.
- `ImageBuffer::{get_pixel_checked, get_pixel_mut_checked}` provide panic-free
access to pixels and channels by returning `Option<&P>` and `Option<&mut P>`.
- `ImageBuffer::write_to` has been added, encoding the buffer to a writer. This
method already existed on `DynamicImage`.
- `DynamicImage` now implements `From<_>` for all supported buffer types.
- `DynamicImage` now implements `Default`, an empty `Rgba8` image.
- `imageops::overlay` now takes coordinates as `i64`.
Limits:
- Added `Limits` and `LimitSupport`, utilized in `io::Reader`. These can be
configured for rudimentary protection against resource exhaustion (images
pretending to require a very large buffer). These types are not yet
exhaustive by design, and more and stricter limits may be added in the
future.
- Decoders that do provide inherent support for limits, or reserve a
significant amount of internal memory, are urged to implement the
`set_limits` extension to `ImageDecoder`. Some strict limits are opt-in, which
may cause decoding to fail if not supported.
Miscellaneous:
- `PNMSubtype` has been renamed to `PnmSubtype`, by Rust's naming scheme.
- Several incorrectly capitalized `PNM*` aliases have been removed.
- Several `enum` types that had previously used a hidden variant now use the
official `#[non_exhaustive]` attribute instead.
### Version 0.23.14
- Unified gif blending in different decode methods, fixing out-of-bounds checks
in a number of weirdly positioned frames.
- Hardened TGA decoder against a number of malicious inputs.
- Fix forward incompatible usage of the panic macro.
- Fix load_rect for gif reaching `unreachable!()` code.
- Added `ExtendedColorType::A8`.
- Allow TGA to load alpha-only images.
- Optimized load_rect to avoid unnecessary seeks.
### Version 0.23.13
- Fix an inconsistency in supported formats of different methods for encoding
an image.
- Fix `thumbnail` choosing an empty image. It now always prefers non-empty image
dimensions.
- Fix integer overflow in calculating required bytes for decoded image buffers
for farbfeld, hdr, and pnm decoders. These will now error early.
- Fix a panic decoding certain `jpeg` images without frames or metadata.
- Optimized the `jpeg` encoder.
- Optimized `GenericImage::copy_from` default impl in various cases.
- Add `avif` decoders. You must enable it explicitly and it is not covered by
our usual MSRV policy of Rust 1.34. Instead, only latest stable is supported.
- Add `ImageFormat::{can_read, can_write}`
- Add `Frame::buffer_mut`
- Add speed and quality options on `avif` encoder.
- Add speed parameter to `gif` encoder.
- Expose control over sequence repeat to the `gif` encoder.
- Add `{contrast,brighten,huerotate}_in_place` functions in imageproc.
- Relax `Default` impl of `ImageBuffer`, removing the bound on the color type.
- Derive Debug, Hash, PartialEq, Eq for DynamicImage
### Version 0.23.12
- Fix a soundness issue affecting the impls of `Pixel::from_slice_mut`. This
would previously reborrow the mutable input reference as a shared one but
then proceed to construct the mutable result reference from it. While UB
according to Rust's memory model, we're fairly certain that no miscompilation
can happen with the LLVM codegen in practice.
See 5cbe1e6767d11aff3f14c7ad69a06b04e8d583c7 for more details.
- Fix `imageops::blur` panicking when `sigma = 0.0`. It now defaults to `1.0`,
as do all negative values.
- Fix re-exporting `png::{CompressionType, FilterType}` to maintain SemVer
compatibility with the `0.23` releases.
- Add `ImageFormat::from_extension`
- Add copyless DynamicImage to byte slice/vec conversion.
- Add bit-depth specific `into_` and `to_` DynamicImage conversion methods.
### Version 0.23.11
- The `NeuQuant` implementation is now supplied by `color_quant`. Use of the
type defined by this library is discouraged.
- The `jpeg` decoder can now downscale images while decoding, by factors of 1,
2, 4, or 8.
- Optimized jpeg encoding by ~5-15%.
- Deprecated the `clamp` function. Use `num-traits` instead.
- The ICO decoder now accepts an empty mask.
- Fixed an overflow in ICO mask decoding potentially leading to panic.
- Added `ImageOutputFormat` for `AVIF`
- Updated `tiff` to `0.6` with lzw performance improvements.
### Version 0.23.10
- Added AVIF encoding capabilities using the `ravif` crate. Please note that
the feature targets the latest stable compiler and is not enabled by default.
- Added `ImageBuffer::as_raw` to inspect the underlying container.
- Updated `gif` to `0.11` with large performance improvements.
### Version 0.23.9
- Introduced correctly capitalized aliases for some SCREAM_CASE types.
- Introduced `imageops::{vertical_gradient, horizontal_gradient}` for writing
simple color gradients into an image.
- Sped up methods iterating over `Pixels`, `PixelsMut`, etc. by using exact
chunks internally. This should auto-vectorize `ImageBuffer::from_pixel`.
- Adjusted `Clone` impls of iterators to not require a bound on the pixel.
- Add `Debug` impls for iterators where the pixel's channel implements it.
- Add comparison impls for `FilterType`
### Version 0.23.8
- `flat::Error` now implements the standard `Error` trait
- The type parameter of `Map` has been relaxed to `?Sized`
- Added the `imageops::tile` function that repeats one image across another
### Version 0.23.7
- Iterators over immutable pixels of `ImageBuffer` can now be cloned
- Added a `tga` encoder
- Added `ColorMap::lookup`, an optional reversal of the map
- The `EncodableLayout` trait is now exported
### Version 0.23.6
- Added `png::ApngDecoder`, an adapter decoding the animation in an APNG.
- Fixed a bug in `jpeg` encoding that would darken output colors.
- Added a utility constructor `FlatSamples::with_monocolor`.
- Added `ImageBuffer::as_flat_samples_mut` which is a mutable variant of the
existing ffi-helper `ImageBuffer::as_flat_samples`.
### Version 0.23.5
- The `png` encoder now allows configuring compression and filter type. The
output is not part of stability guarantees, see its documentation.
- The `jpeg` encoder now accepts any implementor of `GenericImageView`. This
allows images that are only partially present in memory to be encoded.
- `ImageBuffer` now derives `Hash`, `PartialEq`, `Eq`.
- The `Pixels`/`PixelsMut` iterator no longer yields out-of-bounds pixels when
the underlying buffer is larger than required.
- The `pbm` decoder correctly decodes ascii data again, fixing a regression
where it would use the sample value `1` as white instead of `255`.
- Fix encoding of RGBA data in `gif` frames.
- Constructing a `Rows`/`RowsMut` iterator no longer panics when the image has
a width or height of `0`.
### Version 0.23.4
- Improved the performance of decoding animated gifs
- Added `crop_imm` which functions like `crop` but on a shared reference
- The gif `DisposalMethod::Any` is treated as `Keep`, consistent with browsers
- Most errors no longer allocate a string, instead implement Display.
- Add some implementations of `Error::source`
### Version 0.23.3
- Added `ColorType::has_alpha` to facilitate lossless conversion
- Recognize extended WebP formats for decoding
- Added decoding and encoding for the `farbfeld` format
- Export named iterator types created from various `ImageBuffer` methods
- Error in jpeg encoder for images larger than 65536 pixels, fixes panic
### Version 0.23.2
- The dependency on `jpeg-decoder` now reflects minimum requirements.
### Version 0.23.1
- Fix cmyk_to_rgb (jpeg) causing off by one rounding errors.
- A number of performance improvements for jpeg (encode and decode), bmp, vp8
- Added more details to errors for many formats
### Version 0.23.0
This major release intends to improve the interface with regard to handling of
color format data and errors, for both decoding and encoding. This necessitated
many breaking changes anyway, so the opportunity was also used to improve
compliance with the interface guidelines, such as outstanding renamings.
It is not yet perfect with regard to color spaces, but it was designed first as
an improvement over the previous interface's handling of in-memory color
formats. We'll get to color spaces in a later major version.
- Heavily reworked `ColorType`:
- This type is now used for denoting formats for which we support operations
on buffers in these memory representations. Particularly, all channels in
pixel types are assumed to be an integer number of bytes (In terms of the
Rust type system, these are `Sized` and one can create slices of channel
values).
- An `ExtendedColorType` is used to express more generic color formats for
which the library has limited support but can be converted/scaled/mapped
into a `ColorType` buffer. This operation might be fallible but, for
example, includes sources with 1/2/4-bit components.
- Both types are non-exhaustive to add more formats in a minor release.
- A work-in-progress (#1085) will further separate the color model from the
specific channel instantiation, e.g. both `8-bit RGB` and `16-bit BGR`
are instantiations of `RGB` color model.
- Heavily rework `ImageError`:
- The top-level enum type now serves to differentiate cause with multiple
opaque representations for the actual error. These are no longer simple
Strings but contain useful types. Third-party decoders that have no
variant in `ImageFormat` have also been considered.
- Support for `Error::source` that can be downcast to an error from a
matching version of the underlying decoders. Note that the version is not
part of the stable interface guarantees, this should not be relied upon
for correctness and only be used as an optimization.
- Added image format indications to errors.
- The error values produced by decoders will be upgraded incrementally. See
something that still produces plain old String messages? Feel free to
send a PR.
- Reworked the `ImageDecoder` trait:
- `read_image` takes an output buffer argument instead of allocating all
memory on its own.
- The return type of `dimensions` now aligns with `GenericImage` sizes.
- The `colortype` method was renamed to `color_type` for conformity.
- The enums `ColorType`, `DynamicImage`, `imageops::FilterType`, `ImageFormat`
no longer re-export all of their variants in the top-level of the crate. This
removes the growing pollution in the documentation and usage. You can still
insert the equivalent statement on your own:
`use image::ImageFormat::{self, *};`
- The result of `encode` operations is now uniformly an `ImageResult<()>`.
- Removed public converters from some `tiff`, `png`, `gif`, `jpeg` types,
mainly such as error conversion. This allows upgrading the dependency across
major versions without a major release in `image` itself.
- On that note, the public interface of `gif` encoder no longer takes a
`gif::Frame` but rather deals with `image::Frame` only. If you require to
specify the disposal method, transparency, etc. then you may want to wait
with upgrading but (see next change).
- The `gif` encoder now errors on invalid dimensions or unsupported color
formats. It would previously silently reinterpret bytes as RGB/RGBA.
- The capitalization of `ImageFormat` and other enum variants has been
adjusted to adhere to the API guidelines. These variants are now spelled
`Gif`, `Png`, etc. The same change has been made to the name of types such as
`HDRDecoder`.
- The `Progress` type has finally received public accessor methods. Strange that
no one reported them missing.
- Introduced `PixelDensity` and `PixelDensityUnit` to store DPI information in
formats that support encoding this form of meta data (e.g. in `jpeg`).
### Version 0.22.5
- Added `GenericImage::copy_within`, specialized for `ImageBuffer`
- Fixed decoding of interlaced `gif` files
- Prepare for future compatibility of array `IntoIterator` in example code
### Version 0.22.4
- Added in-place variants for flip and rotate operations.
- The bmp encoder now checks if dimensions are valid for the format. It would
previously write a subset or panic.
- Removed deprecated implementations of `Error::description`
- Added `DynamicImage::into_*` which convert without an additional allocation.
- The PNG encoder errors on unsupported color types where it had previously
silently swapped color channels.
- Enabled saving images as `gif` with `save_buffer`.
### Version 0.22.3
- Added a new module `io` containing a configurable `Reader`. It can replace
the bunch of free functions: `image::{load_*, open, image_dimensions}` while
enabling new combinations such as `open` but with format deduced from content
instead of file path.
- Fixed `const_err` lint in the macro expanded implementations of `Pixel`. This
can only affect your crate if `image` is used as a path dependency.
### Version 0.22.2
- Undeprecate `unsafe` trait accessors. Further evaluation showed that their
deprecation should be delayed until trait `impl` specialization is available.
- Fixed magic bytes used to detect `tiff` images.
- Added `DynamicImage::from_decoder`.
- Fixed a bug in the `PNGReader` that caused an infinite loop.
- Added `ColorType::{bits_per_pixel, num_components}`.
- Added `ImageFormat::from_path`, same format deduction as the `open` method.
- Fixed a panic in the gif decoder.
- Aligned background color handling of `gif` to web browser implementations.
- Fixed handling of partial frames in animated `gif`.
- Removed unused direct `lzw` dependency, an indirect dependency in `tiff`.
### Version 0.22.1
- Fixed build with no features enabled
### Version 0.22
- The required Rust version is now `1.34.2`.
- Note the website and blog: [image-rs.org][1] and [blog.image-rs.org][2]
- `PixelMut` now only on `ImageBuffer` and removed from `GenericImage`
interface. Prefer iterating manually in the generic case.
- Replaced an unsafe interface in the hdr decoder with a safe variant.
- Support loading 2-bit BMP images
- Add method to save an `ImageBuffer`/`DynamicImage` with specified format
- Update tiff to `0.3` with a writer
- Update png to `0.15`, fixes reading of interlaced sub-byte pixels
- Always use custom struct for `ImageDecoder::Reader`
- Added `apply_without_alpha` and `map_without_alpha` to `Pixel` trait
- Pixel information now with associated constants instead of static methods
- Changed color structs to tuple types with single component. Improves
ergonomics of destructuring assignment and construction.
- Add lifetime parameter on `ImageDecoder` trait.
- Remove unnecessary `'static` bounds on affine operations
- Add function to retrieve image dimensions without loading full image
- Allow different image types in overlay and replace
- Iterators over rows of `ImageBuffer`, mutable variants
[1]: https://www.image-rs.org
[2]: https://blog.image-rs.org
### Version 0.21.2
- Fixed a variety of crashes and opaque errors in webp
- Updated the png limits to be less restrictive
- Reworked even more `unsafe` operations into safe alternatives
- Derived Debug on FilterType and Deref on Pixel
- Removed a restriction on DXT to always require power of two dimensions
- Change the encoding of RGBA in bmp using bitfields
- Corrected various urls
### Version 0.21.1
- A fairly important bugfix backport
- Fixed a potentially memory safety issue in the hdr and tiff decoders, see #885
- See [the full advisory](docs/2019-04-23-memory-unsafety.md) for an analysis
- Fixes `ImageBuffer` index calculation for very, very large images
- Fix some crashes while parsing specific incomplete pnm images
- Added comprehensive fuzzing for the pam image types
### Version 0.21
- Updated README to use `GenericImageView`
- Removed outdated version number from CHANGES
- Compiles now with wasm-unknown-emscripten target
- Restructured `ImageDecoder` trait
- Updated README with a more colorful example for the Julia fractal
- Use Rust 1.24.1 as minimum supported version
- Support for loading GIF frames one at a time with `animation::Frames`
- The TGA decoder now recognizes 32 bpp as RGBA(8)
- Fixed `to_bgra` document comment
- Added release test script
- Removed unsafe code blocks several places
- Fixed overlay overflow bug issues with documented proofs
### Version 0.20
- Clippy lint pass
- Updated num-rational dependency
- Added BGRA and BGR color types
- Improved performance of image resizing
- Improved PBM decoding
- PNM P4 decoding now returns bits instead of bytes
- Fixed move of overlapping buffers in BMP decoder
- Fixed some document comments
- `GenericImage` and `GenericImageView` are now object-safe
- Moved TIFF code to its own library
- Fixed README examples
- Fixed ordering of interpolated parameters in TIFF decode error string
- Thumbnail now handles upscaling
- GIF encoding for multiple frames
- Improved subimages API
- Cargo fmt fixes
### Version 0.19
- Fixed panic when blending with alpha zero.
- Made `save` consistent.
- Consistent size calculation.
- Fixed bug in `apply_with_alpha`.
- Implemented `TGADecoder::read_scanline`.
- Use deprecated attribute for `pixels_mut`.
- Fixed bug in JPEG grayscale encoding.
- Fixed multi image TIFF.
- PNM encoder.
- Added `#[derive(Hash)]` for `ColorType`.
- Use `num-derive` for `#[derive(FromPrimitive)]`.
- Added `into_frames` implementation for GIF.
- Made rayon an optional dependency.
- Fixed issue where resizing image did not give exact width/height.
- Improved downscale.
- Added a way to expose options when saving files.
- Fixed some compiler warnings.
- Switched to lzw crate instead of using built-in version.
- Added `ExactSizeIterator` implementations to buffer structs.
- Added `resize_to_fill` method.
- DXT encoding support.
- Applied clippy suggestions.
### Version 0.4
- Various improvements.
- Additional supported image formats (BMP and ICO).
- GIF and PNG codec moved into separate crates.
### Version 0.3
- Replace `std::old_io` with `std::io`.
### Version 0.2
- Support for interlaced PNG images.
- Writing support for GIF images (full color and paletted).
- Color quantizer that converts 32-bit images to paletted, including the alpha channel.
- Initial support for reading TGA images.
- Reading support for TIFF images (packbits and FAX compression not supported).
- Various bug fixes and improvements.
### Version 0.1
- Initial release
- Basic reading support for png, jpeg, gif, ppm and webp.
- Basic writing support for png and jpeg.
- A collection of basic image processing functions like `blur` or `invert`

vendor/image/Cargo.lock.msrv vendored Normal file (2311 lines)

File diff suppressed because it is too large

vendor/image/Cargo.toml vendored Normal file (188 lines)

@@ -0,0 +1,188 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2018"
rust-version = "1.61.0"
name = "image"
version = "0.24.7"
authors = ["The image-rs Developers"]
exclude = [
"src/png/testdata/*",
"examples/*",
"tests/*",
]
description = "Imaging library. Provides basic image processing and encoders/decoders for common image formats."
homepage = "https://github.com/image-rs/image"
documentation = "https://docs.rs/image"
readme = "README.md"
categories = [
"multimedia::images",
"multimedia::encoding",
]
license = "MIT"
repository = "https://github.com/image-rs/image"
resolver = "2"
[lib]
name = "image"
path = "./src/lib.rs"
[[bench]]
name = "decode"
path = "benches/decode.rs"
harness = false
[[bench]]
name = "encode"
path = "benches/encode.rs"
harness = false
[[bench]]
name = "copy_from"
harness = false
[dependencies.bytemuck]
version = "1.7.0"
features = ["extern_crate_alloc"]
[dependencies.byteorder]
version = "1.3.2"
[dependencies.color_quant]
version = "1.1"
[dependencies.dav1d]
version = "0.6.0"
optional = true
[dependencies.dcv-color-primitives]
version = "0.4.0"
optional = true
[dependencies.exr]
version = "1.5.0"
optional = true
[dependencies.gif]
version = "0.12"
optional = true
[dependencies.jpeg]
version = "0.3.0"
optional = true
default-features = false
package = "jpeg-decoder"
[dependencies.libwebp]
version = "0.2.2"
optional = true
default-features = false
package = "webp"
[dependencies.mp4parse]
version = "0.17.0"
optional = true
[dependencies.num-rational]
version = "0.4"
default-features = false
[dependencies.num-traits]
version = "0.2.0"
[dependencies.png]
version = "0.17.6"
optional = true
[dependencies.qoi]
version = "0.4"
optional = true
[dependencies.ravif]
version = "0.11.0"
optional = true
[dependencies.rgb]
version = "0.8.25"
optional = true
[dependencies.tiff]
version = "0.9.0"
optional = true
[dev-dependencies.crc32fast]
version = "1.2.0"
[dev-dependencies.criterion]
version = "0.4"
[dev-dependencies.glob]
version = "0.3"
[dev-dependencies.jpeg]
version = "0.3.0"
features = ["platform_independent"]
default-features = false
package = "jpeg-decoder"
[dev-dependencies.num-complex]
version = "0.4"
[dev-dependencies.quickcheck]
version = "1"
[features]
avif = ["avif-encoder"]
avif-decoder = [
"mp4parse",
"dcv-color-primitives",
"dav1d",
]
avif-encoder = [
"ravif",
"rgb",
]
benchmarks = []
bmp = []
dds = ["dxt"]
default = [
"gif",
"jpeg",
"ico",
"png",
"pnm",
"tga",
"tiff",
"webp",
"bmp",
"hdr",
"dxt",
"dds",
"farbfeld",
"jpeg_rayon",
"openexr",
"qoi",
]
dxt = []
farbfeld = []
hdr = []
ico = [
"bmp",
"png",
]
jpeg_rayon = ["jpeg/rayon"]
openexr = ["exr"]
pnm = []
qoi = ["dep:qoi"]
tga = []
webp = []
webp-encoder = ["libwebp"]


@@ -0,0 +1,98 @@
cargo-features = ["public-dependency"]
[package]
name = "image"
version = "0.24.0-alpha"
edition = "2018"
rust-version = "1.56"
license = "MIT"
description = "Imaging library written in Rust. Provides basic filters and decoders for the most common image formats."
authors = ["The image-rs Developers"]
readme = "README.md"
# crates.io metadata
documentation = "https://docs.rs/image"
repository = "https://github.com/image-rs/image"
homepage = "https://github.com/image-rs/image"
categories = ["multimedia::images", "multimedia::encoding"]
# Crate build related
exclude = [
"src/png/testdata/*",
"examples/*",
"tests/*",
]
[lib]
name = "image"
path = "./src/lib.rs"
[dependencies]
bytemuck = { version = "1.7.0", features = ["extern_crate_alloc"] } # includes cast_vec
byteorder = "1.3.2"
num-iter = "0.1.32"
num-rational = { version = "0.4", default-features = false }
num-traits = { version = "0.2.0", public = true }
gif = { version = "0.11.1", optional = true }
jpeg = { package = "jpeg-decoder", version = "0.2.1", default-features = false, optional = true }
png = { version = "0.17.0", optional = true }
tiff = { version = "0.9.0", optional = true }
ravif = { version = "0.8.0", optional = true }
rgb = { version = "0.8.25", optional = true }
mp4parse = { version = "0.12.0", optional = true }
dav1d = { version = "0.6.0", optional = true }
dcv-color-primitives = { version = "0.4.0", optional = true }
exr = { version = "1.4.1", optional = true }
color_quant = { version = "1.1", public = true }
[dev-dependencies]
crc32fast = "1.2.0"
num-complex = "0.4"
glob = "0.3"
quickcheck = "1"
criterion = "0.3"
[features]
# TODO: Add "avif" to this list while preparing for 0.24.0
default = ["gif", "jpeg", "ico", "png", "pnm", "tga", "tiff", "webp", "bmp", "hdr", "dxt", "dds", "farbfeld", "jpeg_rayon", "openexr"]
ico = ["bmp", "png"]
pnm = []
tga = []
webp = []
bmp = []
hdr = []
dxt = []
dds = ["dxt"]
farbfeld = []
openexr = ["exr"]
# Enables multi-threading.
# Requires latest stable Rust.
jpeg_rayon = ["jpeg/rayon"]
# Non-default, enables avif support.
# Requires latest stable Rust.
avif = ["avif-encoder"]
# Requires latest stable Rust and recent nasm (>= 2.14).
avif-encoder = ["ravif", "rgb"]
# Non-default, even in `avif`. Requires stable Rust and native dependency libdav1d.
avif-decoder = ["mp4parse", "dcv-color-primitives", "dav1d"]
# Build some inline benchmarks. Useful only during development.
# Requires rustc nightly for feature test.
benchmarks = []
[[bench]]
path = "benches/decode.rs"
name = "decode"
harness = false
[[bench]]
path = "benches/encode.rs"
name = "encode"
harness = false
[[bench]]
name = "copy_from"
harness = false
vendor/image/LICENSE vendored Normal file
@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 PistonDevelopers
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
vendor/image/README.md vendored Normal file
@@ -0,0 +1,250 @@
# Image
[![crates.io](https://img.shields.io/crates/v/image.svg)](https://crates.io/crates/image)
[![Documentation](https://docs.rs/image/badge.svg)](https://docs.rs/image)
[![Build Status](https://github.com/image-rs/image/workflows/Rust%20CI/badge.svg)](https://github.com/image-rs/image/actions)
[![Gitter](https://badges.gitter.im/image-rs/image.svg)](https://gitter.im/image-rs/image?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
Maintainers: [@HeroicKatora](https://github.com/HeroicKatora), [@fintelia](https://github.com/fintelia)
[How to contribute](https://github.com/image-rs/organization/blob/master/CONTRIBUTING.md)
## An Image Processing Library
This crate provides basic image processing functions and methods for converting to and from various image formats.
All image processing functions provided operate on types that implement the `GenericImageView` and `GenericImage` traits and return an `ImageBuffer`.
## Supported Image Formats
`image` provides implementations of common image format encoders and decoders.
<!--- NOTE: Make sure to keep this table in sync with the one in src/lib.rs -->
| Format | Decoding | Encoding |
| ------ | -------- | -------- |
| AVIF | Only 8-bit \*\* | Lossy |
| BMP | Yes | Rgb8, Rgba8, Gray8, GrayA8 |
| DDS | DXT1, DXT3, DXT5 | No |
| Farbfeld | Yes | Yes |
| GIF | Yes | Yes |
| ICO | Yes | Yes |
| JPEG | Baseline and progressive | Baseline JPEG |
| OpenEXR | Rgb32F, Rgba32F (no dwa compression) | Rgb32F, Rgba32F (no dwa compression) |
| PNG | All supported color types | Same as decoding |
| PNM | PBM, PGM, PPM, standard PAM | Yes |
| QOI | Yes | Yes |
| TGA | Yes | Rgb8, Rgba8, Bgr8, Bgra8, Gray8, GrayA8 |
| TIFF | Baseline(no fax support) + LZW + PackBits | Rgb8, Rgba8, Gray8 |
| WebP | Yes | Rgb8, Rgba8 \* |
- \* Requires the `webp-encoder` feature, uses the libwebp C library.
- \*\* Requires the `avif-decoder` feature, uses the libdav1d C library.
### The [`ImageDecoder`](https://docs.rs/image/*/image/trait.ImageDecoder.html) and [`ImageDecoderRect`](https://docs.rs/image/*/image/trait.ImageDecoderRect.html) Traits
All image format decoders implement the `ImageDecoder` trait, which provides
basic methods for getting image metadata and decoding images. Some formats
additionally provide `ImageDecoderRect` implementations which allow for
decoding only part of an image at once.
The most important methods for decoders are...
+ **dimensions**: Return a tuple containing the width and height of the image.
+ **color_type**: Return the color type of the image data produced by this decoder.
+ **read_image**: Decode the entire image into a slice of bytes.
## Pixels
`image` provides the following pixel types:
+ **Rgb**: RGB pixel
+ **Rgba**: RGB with alpha (RGBA pixel)
+ **Luma**: Grayscale pixel
+ **LumaA**: Grayscale with alpha
All pixels are parameterised by their component type.
## Images
Individual pixels within images are indexed with (0,0) at the top left corner.
### The [`GenericImageView`](https://docs.rs/image/*/image/trait.GenericImageView.html) and [`GenericImage`](https://docs.rs/image/*/image/trait.GenericImage.html) Traits
Traits that provide methods for inspecting (`GenericImageView`) and manipulating (`GenericImage`) images, parameterised over the image's pixel type.
Some of these methods for `GenericImageView` are...
+ **dimensions**: Return a tuple containing the width and height of the image.
+ **get_pixel**: Returns the pixel located at (x, y).
+ **pixels**: Returns an Iterator over the pixels of this image.
While some of the methods for `GenericImage` are...
+ **put_pixel**: Put a pixel at location (x, y).
+ **copy_from**: Copies all of the pixels from another image into this image.
### Representation of Images
`image` provides two main ways of representing image data:
#### [`ImageBuffer`](https://docs.rs/image/*/image/struct.ImageBuffer.html)
An image parameterised by its Pixel type, represented by a width and height and a vector of pixels. It provides direct access to its pixels and implements the `GenericImageView` and `GenericImage` traits.
```rust
use image::{GenericImage, GenericImageView, ImageBuffer, RgbImage};
// Construct a new RGB ImageBuffer with the specified width and height.
let img: RgbImage = ImageBuffer::new(512, 512);
// Construct a new image by repeated calls to the supplied closure.
let mut img = ImageBuffer::from_fn(512, 512, |x, y| {
if x % 2 == 0 {
image::Luma([0u8])
} else {
image::Luma([255u8])
}
});
// Obtain the image's width and height.
let (width, height) = img.dimensions();
// Access the pixel at coordinate (100, 100).
let pixel = img[(100, 100)];
// Or use the `get_pixel` method from the `GenericImageView` trait.
let pixel = *img.get_pixel(100, 100);
// Put a pixel at coordinate (100, 100).
img.put_pixel(100, 100, pixel);
// Iterate over all pixels in the image.
for pixel in img.pixels() {
// Do something with pixel.
}
```
#### [`DynamicImage`](https://docs.rs/image/*/image/enum.DynamicImage.html)
A `DynamicImage` is an enumeration over all supported `ImageBuffer<P>` types.
Its exact image type is determined at runtime. It is the type returned when opening an image.
For convenience `DynamicImage` reimplements all image processing functions.
`DynamicImage` implements the `GenericImageView` and `GenericImage` traits for RGBA pixels.
#### [`SubImage`](https://docs.rs/image/*/image/struct.SubImage.html)
A view into another image, delimited by the coordinates of a rectangle.
The coordinates given set the position of the top left corner of the rectangle.
This is used to perform image processing functions on a subregion of an image.
```rust
use image::{GenericImageView, ImageBuffer, RgbImage, imageops};
let mut img: RgbImage = ImageBuffer::new(512, 512);
let subimg = imageops::crop(&mut img, 0, 0, 100, 100);
assert!(subimg.dimensions() == (100, 100));
```
## Image Processing Functions
These are the functions defined in the `imageops` module. All functions operate on types that implement the `GenericImage` trait.
Note that some of the functions are very slow in debug mode. Make sure to use release mode if you experience any performance issues.
+ **blur**: Performs a Gaussian blur on the supplied image.
+ **brighten**: Brighten the supplied image.
+ **huerotate**: Hue rotate the supplied image by degrees.
+ **contrast**: Adjust the contrast of the supplied image.
+ **crop**: Return a mutable view into an image.
+ **filter3x3**: Perform a 3x3 box filter on the supplied image.
+ **flip_horizontal**: Flip an image horizontally.
+ **flip_vertical**: Flip an image vertically.
+ **grayscale**: Convert the supplied image to grayscale.
+ **invert**: Invert each pixel within the supplied image. This function operates in place.
+ **resize**: Resize the supplied image to the specified dimensions.
+ **rotate180**: Rotate an image 180 degrees clockwise.
+ **rotate270**: Rotate an image 270 degrees clockwise.
+ **rotate90**: Rotate an image 90 degrees clockwise.
+ **unsharpen**: Performs an unsharpen mask on the supplied image.
For more options, see the [`imageproc`](https://crates.io/crates/imageproc) crate.
## Examples
### Opening and Saving Images
`image` provides the `open` function for opening images from a path. The image
format is determined from the path's file extension. The `io` module provides a
reader which offers more control.
```rust,no_run
use image::GenericImageView;
fn main() {
// Use the open function to load an image from a Path.
// `open` returns a `DynamicImage` on success.
let img = image::open("tests/images/jpg/progressive/cat.jpg").unwrap();
// The dimensions method returns the image's width and height.
println!("dimensions {:?}", img.dimensions());
// The color method returns the image's `ColorType`.
println!("{:?}", img.color());
// Write the contents of this image to the Writer in PNG format.
img.save("test.png").unwrap();
}
```
### Generating Fractals
```rust,no_run
//! An example of generating julia fractals.
fn main() {
let imgx = 800;
let imgy = 800;
let scalex = 3.0 / imgx as f32;
let scaley = 3.0 / imgy as f32;
// Create a new ImgBuf with width: imgx and height: imgy
let mut imgbuf = image::ImageBuffer::new(imgx, imgy);
// Iterate over the coordinates and pixels of the image
for (x, y, pixel) in imgbuf.enumerate_pixels_mut() {
let r = (0.3 * x as f32) as u8;
let b = (0.3 * y as f32) as u8;
*pixel = image::Rgb([r, 0, b]);
}
// A redundant loop to demonstrate reading image data
for x in 0..imgx {
for y in 0..imgy {
let cx = y as f32 * scalex - 1.5;
let cy = x as f32 * scaley - 1.5;
let c = num_complex::Complex::new(-0.4, 0.6);
let mut z = num_complex::Complex::new(cx, cy);
let mut i = 0;
while i < 255 && z.norm() <= 2.0 {
z = z * z + c;
i += 1;
}
let pixel = imgbuf.get_pixel_mut(x, y);
let image::Rgb(data) = *pixel;
*pixel = image::Rgb([data[0], i as u8, data[2]]);
}
}
// Save the image as “fractal.png”, the format is deduced from the path
imgbuf.save("fractal.png").unwrap();
}
```
Example output:
<img src="examples/fractal.png" alt="A Julia Fractal, c: -0.4 + 0.6i" width="500" />
### Writing raw buffers
If the high level interface is not needed because the image was obtained by other means, `image` provides the function `save_buffer` to save a buffer to a file.
```rust,no_run
fn main() {
let buffer: &[u8] = unimplemented!(); // Generate the image data
// Save the buffer as "image.png"
image::save_buffer("image.png", buffer, 800, 600, image::ColorType::Rgb8).unwrap()
}
```
vendor/image/benches/README.md vendored Normal file
@@ -0,0 +1,6 @@
# Getting started with benchmarking
To run the benchmarks you need a nightly Rust toolchain.
Then launch them with

    cargo +nightly bench --features=benchmarks
vendor/image/benches/copy_from.rs vendored Normal file
@@ -0,0 +1,14 @@
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use image::{GenericImage, ImageBuffer, Rgba};
pub fn bench_copy_from(c: &mut Criterion) {
let src = ImageBuffer::from_pixel(2048, 2048, Rgba([255u8, 0, 0, 255]));
let mut dst = ImageBuffer::from_pixel(2048, 2048, Rgba([0u8, 0, 0, 255]));
c.bench_function("copy_from", |b| {
b.iter(|| dst.copy_from(black_box(&src), 0, 0))
});
}
criterion_group!(benches, bench_copy_from);
criterion_main!(benches);
vendor/image/benches/decode.rs vendored Normal file
@@ -0,0 +1,109 @@
use std::{fs, iter, path};
use criterion::{criterion_group, criterion_main, Criterion};
use image::ImageFormat;
#[derive(Clone, Copy)]
struct BenchDef {
dir: &'static [&'static str],
files: &'static [&'static str],
format: ImageFormat,
}
fn load_all(c: &mut Criterion) {
const BENCH_DEFS: &'static [BenchDef] = &[
BenchDef {
dir: &["bmp", "images"],
files: &[
"Core_1_Bit.bmp",
"Core_4_Bit.bmp",
"Core_8_Bit.bmp",
"rgb16.bmp",
"rgb24.bmp",
"rgb32.bmp",
"pal4rle.bmp",
"pal8rle.bmp",
"rgb16-565.bmp",
"rgb32bf.bmp",
],
format: ImageFormat::Bmp,
},
BenchDef {
dir: &["gif", "simple"],
files: &["alpha_gif_a.gif", "sample_1.gif"],
format: ImageFormat::Gif,
},
BenchDef {
dir: &["hdr", "images"],
files: &["image1.hdr", "rgbr4x4.hdr"],
format: ImageFormat::Hdr,
},
BenchDef {
dir: &["ico", "images"],
files: &[
"bmp-24bpp-mask.ico",
"bmp-32bpp-alpha.ico",
"png-32bpp-alpha.ico",
"smile.ico",
],
format: ImageFormat::Ico,
},
BenchDef {
dir: &["jpg", "progressive"],
files: &["3.jpg", "cat.jpg", "test.jpg"],
format: ImageFormat::Jpeg,
},
// TODO: pnm
// TODO: png
BenchDef {
dir: &["tga", "testsuite"],
files: &["cbw8.tga", "ctc24.tga", "ubw8.tga", "utc24.tga"],
format: ImageFormat::Tga,
},
BenchDef {
dir: &["tiff", "testsuite"],
files: &[
"hpredict.tiff",
"hpredict_packbits.tiff",
"mandrill.tiff",
"rgb-3c-16b.tiff",
],
format: ImageFormat::Tiff,
},
BenchDef {
dir: &["webp", "images"],
files: &[
"simple-gray.webp",
"simple-rgb.webp",
"vp8x-gray.webp",
"vp8x-rgb.webp",
],
format: ImageFormat::WebP,
},
];
for bench in BENCH_DEFS {
bench_load(c, bench);
}
}
criterion_group!(benches, load_all);
criterion_main!(benches);
fn bench_load(c: &mut Criterion, def: &BenchDef) {
let group_name = format!("load-{:?}", def.format);
let mut group = c.benchmark_group(&group_name);
let paths = IMAGE_DIR.iter().chain(def.dir);
for file_name in def.files {
let path: path::PathBuf = paths.clone().chain(iter::once(file_name)).collect();
let buf = fs::read(path).unwrap();
group.bench_function(file_name.to_owned(), |b| {
b.iter(|| {
image::load_from_memory_with_format(&buf, def.format).unwrap();
})
});
}
}
const IMAGE_DIR: [&'static str; 3] = [".", "tests", "images"];
vendor/image/benches/encode.rs vendored Normal file
@@ -0,0 +1,134 @@
extern crate criterion;
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
use image::{codecs::bmp::BmpEncoder, codecs::jpeg::JpegEncoder, ColorType};
use std::fs::File;
use std::io::{BufWriter, Seek, SeekFrom, Write};
trait Encoder {
fn encode_raw(&self, into: &mut Vec<u8>, im: &[u8], dims: u32, color: ColorType);
fn encode_bufvec(&self, into: &mut Vec<u8>, im: &[u8], dims: u32, color: ColorType);
fn encode_file(&self, file: &File, im: &[u8], dims: u32, color: ColorType);
}
#[derive(Clone, Copy)]
struct BenchDef {
with: &'static dyn Encoder,
name: &'static str,
sizes: &'static [u32],
colors: &'static [ColorType],
}
fn encode_all(c: &mut Criterion) {
const BENCH_DEFS: &'static [BenchDef] = &[
BenchDef {
with: &Bmp,
name: "bmp",
sizes: &[100u32, 200, 400],
colors: &[ColorType::L8, ColorType::Rgb8, ColorType::Rgba8],
},
BenchDef {
with: &Jpeg,
name: "jpeg",
sizes: &[64u32, 128, 256],
colors: &[ColorType::L8, ColorType::Rgb8, ColorType::Rgba8],
},
];
for definition in BENCH_DEFS {
encode_definition(c, definition)
}
}
criterion_group!(benches, encode_all);
criterion_main!(benches);
type BenchGroup<'a> = criterion::BenchmarkGroup<'a, criterion::measurement::WallTime>;
/// Benchmarks encoding a zeroed image.
///
/// For compressed formats this is surely not representative of encoding a normal image but it's a
/// start for benchmarking.
fn encode_zeroed(group: &mut BenchGroup, with: &dyn Encoder, size: u32, color: ColorType) {
let bytes = size as usize * usize::from(color.bytes_per_pixel());
let im = vec![0; bytes * bytes];
group.bench_with_input(
BenchmarkId::new(format!("zero-{:?}-rawvec", color), size),
&im,
|b, image| {
let mut v = vec![];
with.encode_raw(&mut v, &im, size, color);
b.iter(|| with.encode_raw(&mut v, image, size, color));
},
);
group.bench_with_input(
BenchmarkId::new(format!("zero-{:?}-bufvec", color), size),
&im,
|b, image| {
let mut v = vec![];
with.encode_raw(&mut v, &im, size, color);
b.iter(|| with.encode_bufvec(&mut v, image, size, color));
},
);
group.bench_with_input(
BenchmarkId::new(format!("zero-{:?}-file", color), size),
&im,
|b, image| {
let file = File::create("temp.bmp").unwrap();
b.iter(|| with.encode_file(&file, image, size, color));
},
);
}
fn encode_definition(criterion: &mut Criterion, def: &BenchDef) {
let mut group = criterion.benchmark_group(format!("encode-{}", def.name));
for &color in def.colors {
for &size in def.sizes {
encode_zeroed(&mut group, def.with, size, color);
}
}
}
struct Bmp;
struct Jpeg;
trait EncoderBase {
fn encode(&self, into: impl Write, im: &[u8], dims: u32, color: ColorType);
}
impl<T: EncoderBase> Encoder for T {
fn encode_raw(&self, into: &mut Vec<u8>, im: &[u8], dims: u32, color: ColorType) {
into.clear();
self.encode(into, im, dims, color);
}
fn encode_bufvec(&self, into: &mut Vec<u8>, im: &[u8], dims: u32, color: ColorType) {
into.clear();
let buf = BufWriter::new(into);
self.encode(buf, im, dims, color);
}
fn encode_file(&self, mut file: &File, im: &[u8], dims: u32, color: ColorType) {
file.seek(SeekFrom::Start(0)).unwrap();
let buf = BufWriter::new(file);
self.encode(buf, im, dims, color);
}
}
impl EncoderBase for Bmp {
fn encode(&self, mut into: impl Write, im: &[u8], size: u32, color: ColorType) {
let mut x = BmpEncoder::new(&mut into);
x.encode(im, size, size, color).unwrap();
}
}
impl EncoderBase for Jpeg {
fn encode(&self, mut into: impl Write, im: &[u8], size: u32, color: ColorType) {
let mut x = JpegEncoder::new(&mut into);
x.encode(im, size, size, color).unwrap();
}
}
vendor/image/deny.toml vendored Normal file
@@ -0,0 +1,38 @@
# https://embarkstudios.github.io/cargo-deny/
targets = [
{ triple = "aarch64-apple-darwin" },
{ triple = "aarch64-linux-android" },
{ triple = "x86_64-apple-darwin" },
{ triple = "x86_64-pc-windows-msvc" },
{ triple = "x86_64-unknown-linux-gnu" },
{ triple = "x86_64-unknown-linux-musl" },
]
[advisories]
vulnerability = "deny"
unmaintained = "warn"
yanked = "deny"
ignore = []
[bans]
multiple-versions = "deny"
wildcards = "allow" # at least until https://github.com/EmbarkStudios/cargo-deny/issues/241 is fixed
deny = []
skip = [
{ name = "num-derive" } # ravif transatively depends on 0.3 and 0.4.
]
skip-tree = [
{ name = "criterion" }, # dev-dependency
{ name = "quickcheck" }, # dev-dependency
{ name = "dav1d" }, # TODO: needs upgrade
{ name = "clap" },
]
[licenses]
unlicensed = "allow"
allow-osi-fsf-free = "either"
copyleft = "allow"
@@ -0,0 +1,54 @@
# Advisory about potential memory unsafety issues
[While reviewing][i885] some `unsafe Vec::from_raw_parts` operations within the
library, trying to justify their existence with stronger reasoning, we noticed
that they instead did not meet the required conditions set by the standard
library. This unsoundness was quickly removed, but we noted that the same
unjustified reasoning had been applied by a dependency introduced in `0.21`.
For efficiency reasons, we had tried to reuse the allocations made by decoders
for the buffer of the final image. However, that process is error prone. Most
image decoding algorithms change the representation type of color samples to
some degree. Notably, the output pixel type may have a different size and
alignment than the type used in the temporary decoding buffer. In this specific
instance, the `ImageBuffer` of the output expects a linear arrangement of `u8`
samples while the implementation of the `hdr` decoder uses a pixel
representation of `Rgb<u8>`, which has three times the size. One of the
requirements of `Vec::from_raw_parts` reads:
> ptr's T needs to have the same size and alignment as it was allocated with.
This requirement is not present on slices `[T]`, as it is motivated by the
allocator interface. The validity invariant of a reference and slice only
requires the correct alignment here, which was considered in the design of
`Rgb<_>` by giving it a well-defined representation, `#[repr(C)]`. But
critically, this does not guarantee that we can reuse the existing allocation
through effectively transmuting a `Vec<_>`!
The actual impact of this issue is, in real world implementations, limited to
allocators which handle allocations for types of size `1` and `3`/`4`
differently. To the best of my knowledge, this does not apply to `jemalloc` and
the `libc` allocator. However, we decided to proceed with caution here.
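A minimal, stdlib-only sketch of the distinction (the `Rgb`-like type here is a hypothetical stand-in, not the crate's own): reinterpreting the allocation via `Vec::from_raw_parts` violates the quoted requirement, while copying into a freshly allocated byte vector stays sound.

```rust
// Hypothetical three-sample pixel with a well-defined layout, standing in
// for `Rgb<u8>`; `#[repr(C)]` fixes the layout but says nothing about how
// the allocator treated the original allocation.
#[repr(C)]
#[derive(Clone, Copy, PartialEq, Debug)]
struct Rgb([u8; 3]);

// Sound alternative: allocate a fresh `Vec<u8>` and copy the samples out.
// The unsound version would instead reuse `pixels`' allocation via
// `Vec::from_raw_parts(ptr as *mut u8, len * 3, cap * 3)`, even though the
// allocation was made for a type of size 3, not size 1.
fn into_bytes(pixels: Vec<Rgb>) -> Vec<u8> {
    let mut out = Vec::with_capacity(pixels.len() * 3);
    for px in &pixels {
        out.extend_from_slice(&px.0);
    }
    out
}

fn main() {
    let pixels = vec![Rgb([1, 2, 3]), Rgb([4, 5, 6])];
    assert_eq!(into_bytes(pixels), vec![1, 2, 3, 4, 5, 6]);
}
```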
## Lessons for the future
New library dependencies will be under a stricter policy. Not only would they
need to be justified by functionality but also require at least some level of
reasoning how they solve that problem better than alternatives. Some appearance
of maintenance, or the existence of `#[deny(unsafe)]`, will help. We'll
additionally look into existing dependencies trying to identify similar issues
and minimizing the potential surface for implementation risks.
## Sound and safe buffer reuse
It seems that the `Vec` representation is entirely unfit for buffer reuse in
the style which an image library requires. In particular, using pixel types of
different sizes is likely common to handle either whole (encoded) pixels or
individual samples. Thus, we started a new sub-project to address this use
case, [image-canvas][image-canvas]. Contributions and review of its safety are
very welcome; we ask for the community's help here. The release of `v0.1` will
not occur until at least one such review has occurred.
[i885]: https://github.com/image-rs/image/pull/885
[image-canvas]: https://github.com/image-rs/canvas
vendor/image/release.sh vendored Executable file
@@ -0,0 +1,24 @@
#!/bin/bash
# Checks automatic preconditions for a release
determine_new_version() {
grep "version = " Cargo.toml | sed -Ee 's/version = "(.*)"/\1/' | head -1
}
check_notexists_version() {
# Does the api information start with: '{"errors":'
[[ $(wget "https://crates.io/api/v1/crates/image/$1" -qO -) == "{\"errors\":"* ]]
}
check_release_description() {
major=${1%%.*}
minor_patch=${1#$major.}
minor=${minor_patch%%.*}
patch=${minor_patch#$minor.}
# We just need to find a fitting header line
grep -Eq "^### Version ${major}.${minor}$" CHANGES.md
}
version="$(determine_new_version)"
check_release_description $version || { echo "Version does not have a release description"; exit 1; }
check_notexists_version $version || { echo "Version $version appears already published"; exit 1; }
vendor/image/src/animation.rs vendored Normal file
@@ -0,0 +1,342 @@
use std::iter::Iterator;
use std::time::Duration;
use num_rational::Ratio;
use crate::error::ImageResult;
use crate::RgbaImage;
/// An implementation-dependent iterator, reading the frames as requested
pub struct Frames<'a> {
iterator: Box<dyn Iterator<Item = ImageResult<Frame>> + 'a>,
}
impl<'a> Frames<'a> {
/// Creates a new `Frames` from an implementation specific iterator.
pub fn new(iterator: Box<dyn Iterator<Item = ImageResult<Frame>> + 'a>) -> Self {
Frames { iterator }
}
/// Steps through the iterator from the current frame until the end and pushes each frame into
/// a `Vec`.
/// If an error is encountered, that error is returned instead.
///
/// Note: This is equivalent to `Frames::collect::<ImageResult<Vec<Frame>>>()`
pub fn collect_frames(self) -> ImageResult<Vec<Frame>> {
self.collect()
}
}
impl<'a> Iterator for Frames<'a> {
type Item = ImageResult<Frame>;
fn next(&mut self) -> Option<ImageResult<Frame>> {
self.iterator.next()
}
}
/// A single animation frame
#[derive(Clone)]
pub struct Frame {
/// Delay between the frames in milliseconds
delay: Delay,
/// x offset
left: u32,
/// y offset
top: u32,
buffer: RgbaImage,
}
/// The delay of a frame relative to the previous one.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd)]
pub struct Delay {
ratio: Ratio<u32>,
}
impl Frame {
/// Constructs a new frame without any delay.
pub fn new(buffer: RgbaImage) -> Frame {
Frame {
delay: Delay::from_ratio(Ratio::from_integer(0)),
left: 0,
top: 0,
buffer,
}
}
/// Constructs a new frame
pub fn from_parts(buffer: RgbaImage, left: u32, top: u32, delay: Delay) -> Frame {
Frame {
delay,
left,
top,
buffer,
}
}
/// Delay of this frame
pub fn delay(&self) -> Delay {
self.delay
}
/// Returns the image buffer
pub fn buffer(&self) -> &RgbaImage {
&self.buffer
}
/// Returns a mutable image buffer
pub fn buffer_mut(&mut self) -> &mut RgbaImage {
&mut self.buffer
}
/// Returns the image buffer
pub fn into_buffer(self) -> RgbaImage {
self.buffer
}
/// Returns the x offset
pub fn left(&self) -> u32 {
self.left
}
/// Returns the y offset
pub fn top(&self) -> u32 {
self.top
}
}
impl Delay {
/// Create a delay from a ratio of milliseconds.
///
/// # Examples
///
/// ```
/// use image::Delay;
/// let delay_10ms = Delay::from_numer_denom_ms(10, 1);
/// ```
pub fn from_numer_denom_ms(numerator: u32, denominator: u32) -> Self {
Delay {
ratio: Ratio::new_raw(numerator, denominator),
}
}
/// Convert from a duration, clamped between 0 and an implementation-defined maximum.
///
/// The maximum is *at least* `i32::MAX` milliseconds. It should be noted that the accuracy of
/// the result may be relative and very large delays have a coarse resolution.
///
/// # Examples
///
/// ```
/// use std::time::Duration;
/// use image::Delay;
///
/// let duration = Duration::from_millis(20);
/// let delay = Delay::from_saturating_duration(duration);
/// ```
pub fn from_saturating_duration(duration: Duration) -> Self {
// A few notes: The largest number we can represent as a ratio is u32::MAX but we can
// sometimes represent much smaller numbers.
//
// We can represent duration as `millis+a/b` (where a < b, b > 0).
// We must thus bound b with `b·millis + (b-1) <= u32::MAX` or
// > `0 < b <= (u32::MAX + 1)/(millis + 1)`
// Corollary: millis <= u32::MAX
const MILLIS_BOUND: u128 = u32::max_value() as u128;
let millis = duration.as_millis().min(MILLIS_BOUND);
let submillis = (duration.as_nanos() % 1_000_000) as u32;
let max_b = if millis > 0 {
((MILLIS_BOUND + 1) / (millis + 1)) as u32
} else {
MILLIS_BOUND as u32
};
let millis = millis as u32;
let (a, b) = Self::closest_bounded_fraction(max_b, submillis, 1_000_000);
Self::from_numer_denom_ms(a + b * millis, b)
}
/// The numerator and denominator of the delay in milliseconds.
///
/// This is guaranteed to be an exact conversion if the `Delay` was previously created with the
/// `from_numer_denom_ms` constructor.
pub fn numer_denom_ms(self) -> (u32, u32) {
(*self.ratio.numer(), *self.ratio.denom())
}
pub(crate) fn from_ratio(ratio: Ratio<u32>) -> Self {
Delay { ratio }
}
pub(crate) fn into_ratio(self) -> Ratio<u32> {
self.ratio
}
/// Given some fraction, compute an approximation with denominator bounded.
///
/// Note that `denom_bound` bounds the numerator and denominator of all intermediate
/// approximations and the end result.
fn closest_bounded_fraction(denom_bound: u32, nom: u32, denom: u32) -> (u32, u32) {
use std::cmp::Ordering::{self, *};
assert!(0 < denom);
assert!(0 < denom_bound);
assert!(nom < denom);
// Avoid a few type troubles. All intermediate results are bounded by `denom_bound` which
// is in turn bounded by u32::MAX. Representing with u64 allows multiplication of any two
// values without fears of overflow.
// Compare two fractions whose parts fit into a u32.
fn compare_fraction((an, ad): (u64, u64), (bn, bd): (u64, u64)) -> Ordering {
(an * bd).cmp(&(bn * ad))
}
// Computes the numerator of the absolute difference between two such fractions.
fn abs_diff_nom((an, ad): (u64, u64), (bn, bd): (u64, u64)) -> u64 {
let c0 = an * bd;
let c1 = ad * bn;
let d0 = c0.max(c1);
let d1 = c0.min(c1);
d0 - d1
}
let exact = (u64::from(nom), u64::from(denom));
// The lower bound fraction, numerator and denominator.
let mut lower = (0u64, 1u64);
// The upper bound fraction, numerator and denominator.
let mut upper = (1u64, 1u64);
// The closest approximation for now.
let mut guess = (u64::from(nom * 2 > denom), 1u64);
// loop invariant: ad, bd <= denom_bound
// iterates the Farey sequence.
loop {
// Break if we are done.
if compare_fraction(guess, exact) == Equal {
break;
}
// Break if next Farey number is out-of-range.
if u64::from(denom_bound) - lower.1 < upper.1 {
break;
}
// Next Farey approximation n between a and b
let next = (lower.0 + upper.0, lower.1 + upper.1);
// if F < n then replace the upper bound, else replace lower.
if compare_fraction(exact, next) == Less {
upper = next;
} else {
lower = next;
}
// Now correct the closest guess.
// In other words, if |c - f| > |n - f| then replace it with the new guess.
// This favors the guess with smaller denominator on equality.
// |g - f| = |g_diff_nom|/(gd*fd);
let g_diff_nom = abs_diff_nom(guess, exact);
// |n - f| = |n_diff_nom|/(nd*fd);
let n_diff_nom = abs_diff_nom(next, exact);
// The difference |n - f| is smaller than |g - f| if either the integral part of the
// fraction |n_diff_nom|/nd is smaller than the one of |g_diff_nom|/gd or if they are
// the same but the fractional part is larger.
if match (n_diff_nom / next.1).cmp(&(g_diff_nom / guess.1)) {
Less => true,
Greater => false,
// Note that the numerator for the fractional part is smaller than its denominator
// which is smaller than u32 and can't overflow the multiplication with the other
// denominator, that is we can compare these fractions by multiplication with the
// respective other denominator.
Equal => {
compare_fraction(
(n_diff_nom % next.1, next.1),
(g_diff_nom % guess.1, guess.1),
) == Less
}
} {
guess = next;
}
}
(guess.0 as u32, guess.1 as u32)
}
}
impl From<Delay> for Duration {
fn from(delay: Delay) -> Self {
let ratio = delay.into_ratio();
let ms = ratio.to_integer();
let rest = ratio.numer() % ratio.denom();
let nanos = (u64::from(rest) * 1_000_000) / u64::from(*ratio.denom());
Duration::from_millis(ms.into()) + Duration::from_nanos(nanos)
}
}
#[cfg(test)]
mod tests {
use super::{Delay, Duration, Ratio};
#[test]
fn simple() {
let second = Delay::from_numer_denom_ms(1000, 1);
assert_eq!(Duration::from(second), Duration::from_secs(1));
}
#[test]
fn fps_30() {
let thirtieth = Delay::from_numer_denom_ms(1000, 30);
let duration = Duration::from(thirtieth);
assert_eq!(duration.as_secs(), 0);
assert_eq!(duration.subsec_millis(), 33);
assert_eq!(duration.subsec_nanos(), 33_333_333);
}
#[test]
fn duration_outlier() {
let oob = Duration::from_secs(0xFFFF_FFFF);
let delay = Delay::from_saturating_duration(oob);
assert_eq!(delay.numer_denom_ms(), (0xFFFF_FFFF, 1));
}
#[test]
fn duration_approx() {
let oob = Duration::from_millis(0xFFFF_FFFF) + Duration::from_micros(1);
let delay = Delay::from_saturating_duration(oob);
assert_eq!(delay.numer_denom_ms(), (0xFFFF_FFFF, 1));
let inbounds = Duration::from_millis(0xFFFF_FFFF) - Duration::from_micros(1);
let delay = Delay::from_saturating_duration(inbounds);
assert_eq!(delay.numer_denom_ms(), (0xFFFF_FFFF, 1));
let fine =
Duration::from_millis(0xFFFF_FFFF / 1000) + Duration::from_micros(0xFFFF_FFFF % 1000);
let delay = Delay::from_saturating_duration(fine);
// Funnily, 0xFFFF_FFFF is divisible by 5, thus we compare with a `Ratio`.
assert_eq!(delay.into_ratio(), Ratio::new(0xFFFF_FFFF, 1000));
}
#[test]
fn precise() {
// The ratio has only 32 bits in the numerator, too imprecise to get more than 11 digits
// correct. But it may be expressed as 1_000_000/3 instead.
let exceed = Duration::from_secs(333) + Duration::from_nanos(333_333_333);
let delay = Delay::from_saturating_duration(exceed);
assert_eq!(Duration::from(delay), exceed);
}
#[test]
fn small() {
// Not quite a delay of `1 ms`.
let delay = Delay::from_numer_denom_ms(1 << 16, (1 << 16) + 1);
let duration = Duration::from(delay);
assert_eq!(duration.as_millis(), 0);
// Not precisely the original but should be smaller than 1.
let delay = Delay::from_saturating_duration(duration);
assert_eq!(delay.into_ratio().to_integer(), 0);
}
}
vendor/image/src/buffer.rs vendored Normal file
File diff suppressed because it is too large
vendor/image/src/codecs/avif/decoder.rs vendored Normal file
@@ -0,0 +1,177 @@
//! Decoding of AVIF images.
//!
//! The [AVIF] specification defines an image derivative of the AV1 bitstream, an open video codec.
//!
//! [AVIF]: https://aomediacodec.github.io/av1-avif/
use std::convert::TryFrom;
use std::error::Error;
use std::io::{self, Cursor, Read};
use std::marker::PhantomData;
use std::mem;
use crate::error::DecodingError;
use crate::{ColorType, ImageDecoder, ImageError, ImageFormat, ImageResult};
use dav1d::{PixelLayout, PlanarImageComponent};
use dcv_color_primitives as dcp;
use mp4parse::{read_avif, ParseStrictness};
fn error_map<E: Into<Box<dyn Error + Send + Sync>>>(err: E) -> ImageError {
ImageError::Decoding(DecodingError::new(ImageFormat::Avif.into(), err))
}
/// AVIF Decoder.
///
/// Reads one image from the chosen input.
pub struct AvifDecoder<R> {
inner: PhantomData<R>,
picture: dav1d::Picture,
alpha_picture: Option<dav1d::Picture>,
icc_profile: Option<Vec<u8>>,
}
impl<R: Read> AvifDecoder<R> {
/// Create a new decoder that reads its input from `r`.
pub fn new(mut r: R) -> ImageResult<Self> {
let ctx = read_avif(&mut r, ParseStrictness::Normal).map_err(error_map)?;
let coded = ctx.primary_item_coded_data().unwrap_or_default();
let mut primary_decoder = dav1d::Decoder::new();
primary_decoder
.send_data(coded, None, None, None)
.map_err(error_map)?;
let picture = primary_decoder.get_picture().map_err(error_map)?;
let alpha_item = ctx.alpha_item_coded_data().unwrap_or_default();
let alpha_picture = if !alpha_item.is_empty() {
let mut alpha_decoder = dav1d::Decoder::new();
alpha_decoder
.send_data(alpha_item, None, None, None)
.map_err(error_map)?;
Some(alpha_decoder.get_picture().map_err(error_map)?)
} else {
None
};
let icc_profile = ctx
.icc_colour_information()
.map(|x| x.ok().unwrap_or_default())
.map(|x| x.to_vec());
assert_eq!(picture.bit_depth(), 8);
Ok(AvifDecoder {
inner: PhantomData,
picture,
alpha_picture,
icc_profile,
})
}
}
/// Wrapper struct around a `Cursor<Vec<u8>>`
pub struct AvifReader<R>(Cursor<Vec<u8>>, PhantomData<R>);
impl<R> Read for AvifReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
self.0.read(buf)
}
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> {
if self.0.position() == 0 && buf.is_empty() {
mem::swap(buf, self.0.get_mut());
Ok(buf.len())
} else {
self.0.read_to_end(buf)
}
}
}
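The `read_to_end` override above avoids a byte-by-byte copy: when nothing has been read yet and the caller's buffer is empty, it hands over the cursor's backing vector in O(1) via `mem::swap`. A standalone sketch of that trick with only std (`VecReader` is a hypothetical stand-in for `AvifReader`):

```rust
use std::io::{self, Cursor, Read};
use std::mem;

// Stand-in for the wrapper above: a reader over an owned Vec<u8>.
struct VecReader(Cursor<Vec<u8>>);

impl Read for VecReader {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.0.read(buf)
    }
    fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> {
        if self.0.position() == 0 && buf.is_empty() {
            // O(1) handover of the backing allocation, no copy.
            mem::swap(buf, self.0.get_mut());
            Ok(buf.len())
        } else {
            self.0.read_to_end(buf)
        }
    }
}

fn main() -> io::Result<()> {
    let mut r = VecReader(Cursor::new(vec![1, 2, 3]));
    let mut out = Vec::new();
    let n = r.read_to_end(&mut out)?;
    assert_eq!((n, out), (3, vec![1, 2, 3]));
    Ok(())
}
```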
impl<'a, R: 'a + Read> ImageDecoder<'a> for AvifDecoder<R> {
type Reader = AvifReader<R>;
fn dimensions(&self) -> (u32, u32) {
(self.picture.width(), self.picture.height())
}
fn color_type(&self) -> ColorType {
ColorType::Rgba8
}
fn icc_profile(&mut self) -> Option<Vec<u8>> {
self.icc_profile.clone()
}
fn into_reader(self) -> ImageResult<Self::Reader> {
let plane = self.picture.plane(PlanarImageComponent::Y);
Ok(AvifReader(
Cursor::new(plane.as_ref().to_vec()),
PhantomData,
))
}
fn read_image(self, buf: &mut [u8]) -> ImageResult<()> {
assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes()));
dcp::initialize();
if self.picture.pixel_layout() != PixelLayout::I400 {
let pixel_format = match self.picture.pixel_layout() {
PixelLayout::I400 => todo!(),
PixelLayout::I420 => dcp::PixelFormat::I420,
PixelLayout::I422 => dcp::PixelFormat::I422,
PixelLayout::I444 => dcp::PixelFormat::I444,
PixelLayout::Unknown => panic!("Unknown pixel layout"),
};
let src_format = dcp::ImageFormat {
pixel_format,
color_space: dcp::ColorSpace::Bt601,
num_planes: 3,
};
let dst_format = dcp::ImageFormat {
pixel_format: dcp::PixelFormat::Rgba,
color_space: dcp::ColorSpace::Lrgb,
num_planes: 1,
};
let (width, height) = self.dimensions();
let planes = &[
self.picture.plane(PlanarImageComponent::Y),
self.picture.plane(PlanarImageComponent::U),
self.picture.plane(PlanarImageComponent::V),
];
let src_buffers = planes.iter().map(AsRef::as_ref).collect::<Vec<_>>();
let strides = &[
self.picture.stride(PlanarImageComponent::Y) as usize,
self.picture.stride(PlanarImageComponent::U) as usize,
self.picture.stride(PlanarImageComponent::V) as usize,
];
let dst_buffers = &mut [&mut buf[..]];
dcp::convert_image(
width,
height,
&src_format,
Some(strides),
&src_buffers,
&dst_format,
None,
dst_buffers,
)
.map_err(error_map)?;
} else {
let plane = self.picture.plane(PlanarImageComponent::Y);
buf.copy_from_slice(plane.as_ref());
}
if let Some(picture) = self.alpha_picture {
assert_eq!(picture.pixel_layout(), PixelLayout::I400);
let stride = picture.stride(PlanarImageComponent::Y) as usize;
let plane = picture.plane(PlanarImageComponent::Y);
let width = picture.width();
for (buf, slice) in Iterator::zip(
buf.chunks_exact_mut(width as usize * 4),
plane.as_ref().chunks_exact(stride),
) {
for i in 0..width as usize {
buf[3 + i * 4] = slice[i];
}
}
}
Ok(())
}
}
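The alpha-merge loop at the end of `read_image` walks the I400 alpha plane row by row (respecting its stride, which may exceed the row width) and writes each sample into byte 3 of the corresponding RGBA pixel. A std-only sketch of that loop (`merge_alpha` is a hypothetical helper, not the crate's API):

```rust
// Copy a grayscale alpha plane with row stride `stride` into the A
// channel of an interleaved RGBA buffer of the given pixel width.
fn merge_alpha(rgba: &mut [u8], alpha_plane: &[u8], width: usize, stride: usize) {
    for (row, a_row) in rgba
        .chunks_exact_mut(width * 4)
        .zip(alpha_plane.chunks_exact(stride))
    {
        for i in 0..width {
            row[3 + i * 4] = a_row[i]; // A sits at byte 3 of each RGBA pixel
        }
    }
}

fn main() {
    // 2x1 image; alpha plane padded to a stride of 4 bytes.
    let mut rgba = vec![10, 10, 10, 0, 20, 20, 20, 0];
    let alpha = vec![200, 100, 0, 0];
    merge_alpha(&mut rgba, &alpha, 2, 4);
    assert_eq!(rgba, vec![10, 10, 10, 200, 20, 20, 20, 100]);
}
```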

274
vendor/image/src/codecs/avif/encoder.rs vendored Normal file

@@ -0,0 +1,274 @@
//! Encoding of AVIF images.
///
/// The [AVIF] specification defines an image derivative of the AV1 bitstream, an open video codec.
///
/// [AVIF]: https://aomediacodec.github.io/av1-avif/
use std::borrow::Cow;
use std::cmp::min;
use std::io::Write;
use crate::buffer::ConvertBuffer;
use crate::color::{FromColor, Luma, LumaA, Rgb, Rgba};
use crate::error::{
EncodingError, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind,
};
use crate::{ColorType, ImageBuffer, ImageEncoder, ImageFormat, Pixel};
use crate::{ImageError, ImageResult};
use bytemuck::{try_cast_slice, try_cast_slice_mut, Pod, PodCastError};
use num_traits::Zero;
use ravif::{Encoder, Img, RGB8, RGBA8};
use rgb::AsPixels;
/// AVIF Encoder.
///
/// Writes one image into the chosen output.
pub struct AvifEncoder<W> {
inner: W,
encoder: Encoder,
}
/// An enumeration over supported AVIF color spaces
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
#[non_exhaustive]
pub enum ColorSpace {
/// sRGB colorspace
Srgb,
/// BT.709 colorspace
Bt709,
}
impl ColorSpace {
fn to_ravif(self) -> ravif::ColorSpace {
match self {
Self::Srgb => ravif::ColorSpace::RGB,
Self::Bt709 => ravif::ColorSpace::YCbCr,
}
}
}
enum RgbColor<'buf> {
Rgb8(Img<&'buf [RGB8]>),
Rgba8(Img<&'buf [RGBA8]>),
}
impl<W: Write> AvifEncoder<W> {
/// Create a new encoder that writes its output to `w`.
pub fn new(w: W) -> Self {
AvifEncoder::new_with_speed_quality(w, 4, 80) // `cavif` uses these defaults
}
/// Create a new encoder with specified speed and quality, that writes its output to `w`.
/// `speed` accepts a value in the range 0-10, where 0 is the slowest and 10 is the fastest.
/// `quality` accepts a value in the range 0-100, where 0 is the worst and 100 is the best.
pub fn new_with_speed_quality(w: W, speed: u8, quality: u8) -> Self {
// Clamp quality and speed to range
let quality = min(quality, 100);
let speed = min(speed, 10);
let encoder = Encoder::new()
.with_quality(f32::from(quality))
.with_alpha_quality(f32::from(quality))
.with_speed(speed);
AvifEncoder { inner: w, encoder }
}
/// Encode with the specified `color_space`.
pub fn with_colorspace(mut self, color_space: ColorSpace) -> Self {
self.encoder = self
.encoder
.with_internal_color_space(color_space.to_ravif());
self
}
/// Configures `rayon` thread pool size.
/// The default `None` is to use all threads in the default `rayon` thread pool.
pub fn with_num_threads(mut self, num_threads: Option<usize>) -> Self {
self.encoder = self.encoder.with_num_threads(num_threads);
self
}
}
impl<W: Write> ImageEncoder for AvifEncoder<W> {
/// Encode image data with the indicated color type.
///
/// The encoder currently requires all data to be RGBA8; it will be converted internally if
/// necessary. When the data is suitably aligned (e.g. u16 channels on two-byte boundaries),
/// the conversion may be more efficient.
fn write_image(
mut self,
data: &[u8],
width: u32,
height: u32,
color: ColorType,
) -> ImageResult<()> {
self.set_color(color);
// `ravif` needs strongly typed data so let's convert. We can either use a temporarily
// owned version in our own buffer or zero-copy if possible by using the input buffer.
// This requires going through `rgb`.
let mut fallback = vec![]; // This vector is used if we need to do a color conversion.
let result = match Self::encode_as_img(&mut fallback, data, width, height, color)? {
RgbColor::Rgb8(buffer) => self.encoder.encode_rgb(buffer),
RgbColor::Rgba8(buffer) => self.encoder.encode_rgba(buffer),
};
let data = result.map_err(|err| {
ImageError::Encoding(EncodingError::new(ImageFormat::Avif.into(), err))
})?;
self.inner.write_all(&data.avif_file)?;
Ok(())
}
}
impl<W: Write> AvifEncoder<W> {
// Does not currently do anything. Mirrors behaviour of old config function.
fn set_color(&mut self, _color: ColorType) {
// self.config.color_space = ColorSpace::RGB;
}
fn encode_as_img<'buf>(
fallback: &'buf mut Vec<u8>,
data: &'buf [u8],
width: u32,
height: u32,
color: ColorType,
) -> ImageResult<RgbColor<'buf>> {
// Error wrapping utility for color dependent buffer dimensions.
fn try_from_raw<P: Pixel + 'static>(
data: &[P::Subpixel],
width: u32,
height: u32,
) -> ImageResult<ImageBuffer<P, &[P::Subpixel]>> {
ImageBuffer::from_raw(width, height, data).ok_or_else(|| {
ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
))
})
}
// Convert to target color type using few buffer allocations.
fn convert_into<'buf, P>(
buf: &'buf mut Vec<u8>,
image: ImageBuffer<P, &[P::Subpixel]>,
) -> Img<&'buf [RGBA8]>
where
P: Pixel + 'static,
Rgba<u8>: FromColor<P>,
{
let (width, height) = image.dimensions();
// TODO: conversion re-using the target buffer?
let image: ImageBuffer<Rgba<u8>, _> = image.convert();
*buf = image.into_raw();
Img::new(buf.as_pixels(), width as usize, height as usize)
}
// Cast the input slice using few buffer allocations if possible.
// In particular try not to allocate if the caller did the infallible reverse.
fn cast_buffer<Channel>(buf: &[u8]) -> ImageResult<Cow<[Channel]>>
where
Channel: Pod + Zero,
{
match try_cast_slice(buf) {
Ok(slice) => Ok(Cow::Borrowed(slice)),
Err(PodCastError::OutputSliceWouldHaveSlop) => Err(ImageError::Parameter(
ParameterError::from_kind(ParameterErrorKind::DimensionMismatch),
)),
Err(PodCastError::TargetAlignmentGreaterAndInputNotAligned) => {
// Sad, but let's allocate.
// bytemuck reports alignment errors _before_ slop, so check for a size mismatch ourselves first.
if buf.len() % std::mem::size_of::<Channel>() != 0 {
Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)))
} else {
let len = buf.len() / std::mem::size_of::<Channel>();
let mut data = vec![Channel::zero(); len];
let view = try_cast_slice_mut::<_, u8>(data.as_mut_slice()).unwrap();
view.copy_from_slice(buf);
Ok(Cow::Owned(data))
}
}
Err(err) => {
// Are you trying to encode a ZST??
Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(format!("{:?}", err)),
)))
}
}
}
match color {
ColorType::Rgb8 => {
// ravif doesn't do any checks but has some asserts, so we do the checks.
let img = try_from_raw::<Rgb<u8>>(data, width, height)?;
// Now, internally ravif uses u32 but it takes usize. We could do some checked
// conversion but instead we rely on the fact that a non-empty image must be addressable.
if img.pixels().len() == 0 {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
Ok(RgbColor::Rgb8(Img::new(
rgb::AsPixels::as_pixels(data),
width as usize,
height as usize,
)))
}
ColorType::Rgba8 => {
// ravif doesn't do any checks but has some asserts, so we do the checks.
let img = try_from_raw::<Rgba<u8>>(data, width, height)?;
// Now, internally ravif uses u32 but it takes usize. We could do some checked
// conversion but instead we rely on the fact that a non-empty image must be addressable.
if img.pixels().len() == 0 {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
Ok(RgbColor::Rgba8(Img::new(
rgb::AsPixels::as_pixels(data),
width as usize,
height as usize,
)))
}
// we need a separate buffer..
ColorType::L8 => {
let image = try_from_raw::<Luma<u8>>(data, width, height)?;
Ok(RgbColor::Rgba8(convert_into(fallback, image)))
}
ColorType::La8 => {
let image = try_from_raw::<LumaA<u8>>(data, width, height)?;
Ok(RgbColor::Rgba8(convert_into(fallback, image)))
}
// we need to really convert data..
ColorType::L16 => {
let buffer = cast_buffer(data)?;
let image = try_from_raw::<Luma<u16>>(&buffer, width, height)?;
Ok(RgbColor::Rgba8(convert_into(fallback, image)))
}
ColorType::La16 => {
let buffer = cast_buffer(data)?;
let image = try_from_raw::<LumaA<u16>>(&buffer, width, height)?;
Ok(RgbColor::Rgba8(convert_into(fallback, image)))
}
ColorType::Rgb16 => {
let buffer = cast_buffer(data)?;
let image = try_from_raw::<Rgb<u16>>(&buffer, width, height)?;
Ok(RgbColor::Rgba8(convert_into(fallback, image)))
}
ColorType::Rgba16 => {
let buffer = cast_buffer(data)?;
let image = try_from_raw::<Rgba<u16>>(&buffer, width, height)?;
Ok(RgbColor::Rgba8(convert_into(fallback, image)))
}
// for cases we do not support at all?
_ => Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Avif.into(),
UnsupportedErrorKind::Color(color.into()),
),
)),
}
}
}
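The `cast_buffer` helper above reinterprets a `&[u8]` as typed channels via `bytemuck`, borrowing when alignment allows and copying otherwise. A std-only sketch of the same borrow-or-copy fallback for u16 channels (`cast_u16` is a hypothetical helper, assuming native-endian channel data as `bytemuck` does):

```rust
use std::borrow::Cow;

// Reinterpret a byte slice as native-endian u16 channels: borrow when
// the input is 2-byte aligned, copy otherwise, error on odd length.
fn cast_u16(buf: &[u8]) -> Result<Cow<'_, [u16]>, &'static str> {
    if buf.len() % 2 != 0 {
        return Err("length is not a multiple of the channel size");
    }
    // The prefix/suffix of align_to are empty exactly when the slice
    // is already suitably aligned.
    let (prefix, mid, suffix) = unsafe { buf.align_to::<u16>() };
    if prefix.is_empty() && suffix.is_empty() {
        Ok(Cow::Borrowed(mid))
    } else {
        Ok(Cow::Owned(
            buf.chunks_exact(2)
                .map(|c| u16::from_ne_bytes([c[0], c[1]]))
                .collect(),
        ))
    }
}

fn main() {
    assert!(cast_u16(&[1, 2, 3]).is_err()); // odd length rejected
    let v = cast_u16(&[0, 0, 255, 255]).unwrap();
    assert_eq!(v.len(), 2);
}
```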

14
vendor/image/src/codecs/avif/mod.rs vendored Normal file

@@ -0,0 +1,14 @@
//! Encoding of AVIF images.
///
/// The [AVIF] specification defines an image derivative of the AV1 bitstream, an open video codec.
///
/// [AVIF]: https://aomediacodec.github.io/av1-avif/
#[cfg(feature = "avif-decoder")]
pub use self::decoder::AvifDecoder;
#[cfg(feature = "avif-encoder")]
pub use self::encoder::{AvifEncoder, ColorSpace};
#[cfg(feature = "avif-decoder")]
mod decoder;
#[cfg(feature = "avif-encoder")]
mod encoder;

1483
vendor/image/src/codecs/bmp/decoder.rs vendored Normal file

File diff suppressed because it is too large

388
vendor/image/src/codecs/bmp/encoder.rs vendored Normal file

@@ -0,0 +1,388 @@
use byteorder::{LittleEndian, WriteBytesExt};
use std::io::{self, Write};
use crate::error::{
EncodingError, ImageError, ImageFormatHint, ImageResult, ParameterError, ParameterErrorKind,
};
use crate::image::ImageEncoder;
use crate::{color, ImageFormat};
const BITMAPFILEHEADER_SIZE: u32 = 14;
const BITMAPINFOHEADER_SIZE: u32 = 40;
const BITMAPV4HEADER_SIZE: u32 = 108;
/// The representation of a BMP encoder.
pub struct BmpEncoder<'a, W: 'a> {
writer: &'a mut W,
}
impl<'a, W: Write + 'a> BmpEncoder<'a, W> {
/// Create a new encoder that writes its output to ```w```.
pub fn new(w: &'a mut W) -> Self {
BmpEncoder { writer: w }
}
/// Encodes the image ```image```
/// that has dimensions ```width``` and ```height```
/// and ```ColorType``` ```c```.
pub fn encode(
&mut self,
image: &[u8],
width: u32,
height: u32,
c: color::ColorType,
) -> ImageResult<()> {
self.encode_with_palette(image, width, height, c, None)
}
/// Same as ```encode```, but allows a palette to be passed in.
/// The ```palette``` is ignored for color types other than Luma/Luma-with-alpha.
pub fn encode_with_palette(
&mut self,
image: &[u8],
width: u32,
height: u32,
c: color::ColorType,
palette: Option<&[[u8; 3]]>,
) -> ImageResult<()> {
if palette.is_some() && c != color::ColorType::L8 && c != color::ColorType::La8 {
return Err(ImageError::IoError(io::Error::new(
io::ErrorKind::InvalidInput,
format!(
"Unsupported color type {:?} when using a non-empty palette. Supported types: Gray(8), GrayA(8).",
c
),
)));
}
let bmp_header_size = BITMAPFILEHEADER_SIZE;
let (dib_header_size, written_pixel_size, palette_color_count) =
get_pixel_info(c, palette)?;
let row_pad_size = (4 - (width * written_pixel_size) % 4) % 4; // each row must be padded to a multiple of 4 bytes
let image_size = width
.checked_mul(height)
.and_then(|v| v.checked_mul(written_pixel_size))
.and_then(|v| v.checked_add(height * row_pad_size))
.ok_or_else(|| {
ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
))
})?;
let palette_size = palette_color_count * 4; // all palette colors are BGRA
let file_size = bmp_header_size
.checked_add(dib_header_size)
.and_then(|v| v.checked_add(palette_size))
.and_then(|v| v.checked_add(image_size))
.ok_or_else(|| {
ImageError::Encoding(EncodingError::new(
ImageFormatHint::Exact(ImageFormat::Bmp),
"calculated BMP header size larger than 2^32",
))
})?;
// write BMP header
self.writer.write_u8(b'B')?;
self.writer.write_u8(b'M')?;
self.writer.write_u32::<LittleEndian>(file_size)?; // file size
self.writer.write_u16::<LittleEndian>(0)?; // reserved 1
self.writer.write_u16::<LittleEndian>(0)?; // reserved 2
self.writer
.write_u32::<LittleEndian>(bmp_header_size + dib_header_size + palette_size)?; // image data offset
// write DIB header
self.writer.write_u32::<LittleEndian>(dib_header_size)?;
self.writer.write_i32::<LittleEndian>(width as i32)?;
self.writer.write_i32::<LittleEndian>(height as i32)?;
self.writer.write_u16::<LittleEndian>(1)?; // color planes
self.writer
.write_u16::<LittleEndian>((written_pixel_size * 8) as u16)?; // bits per pixel
if dib_header_size >= BITMAPV4HEADER_SIZE {
// Assume BGRA32
self.writer.write_u32::<LittleEndian>(3)?; // compression method - bitfields
} else {
self.writer.write_u32::<LittleEndian>(0)?; // compression method - no compression
}
self.writer.write_u32::<LittleEndian>(image_size)?;
self.writer.write_i32::<LittleEndian>(0)?; // horizontal ppm
self.writer.write_i32::<LittleEndian>(0)?; // vertical ppm
self.writer.write_u32::<LittleEndian>(palette_color_count)?;
self.writer.write_u32::<LittleEndian>(0)?; // all colors are important
if dib_header_size >= BITMAPV4HEADER_SIZE {
// Assume BGRA32
self.writer.write_u32::<LittleEndian>(0xff << 16)?; // red mask
self.writer.write_u32::<LittleEndian>(0xff << 8)?; // green mask
self.writer.write_u32::<LittleEndian>(0xff)?; // blue mask
self.writer.write_u32::<LittleEndian>(0xff << 24)?; // alpha mask
self.writer.write_u32::<LittleEndian>(0x73524742)?; // colorspace - sRGB
// endpoints (3x3) and gamma (3)
for _ in 0..12 {
self.writer.write_u32::<LittleEndian>(0)?;
}
}
// write image data
match c {
color::ColorType::Rgb8 => self.encode_rgb(image, width, height, row_pad_size, 3)?,
color::ColorType::Rgba8 => self.encode_rgba(image, width, height, row_pad_size, 4)?,
color::ColorType::L8 => {
self.encode_gray(image, width, height, row_pad_size, 1, palette)?
}
color::ColorType::La8 => {
self.encode_gray(image, width, height, row_pad_size, 2, palette)?
}
_ => {
return Err(ImageError::IoError(io::Error::new(
io::ErrorKind::InvalidInput,
&get_unsupported_error_message(c)[..],
)))
}
}
Ok(())
}
fn encode_rgb(
&mut self,
image: &[u8],
width: u32,
height: u32,
row_pad_size: u32,
bytes_per_pixel: u32,
) -> io::Result<()> {
let width = width as usize;
let height = height as usize;
let x_stride = bytes_per_pixel as usize;
let y_stride = width * x_stride;
for row in (0..height).rev() {
// from the bottom up
let row_start = row * y_stride;
for px in image[row_start..][..y_stride].chunks_exact(x_stride) {
let r = px[0];
let g = px[1];
let b = px[2];
// written as BGR
self.writer.write_all(&[b, g, r])?;
}
self.write_row_pad(row_pad_size)?;
}
Ok(())
}
fn encode_rgba(
&mut self,
image: &[u8],
width: u32,
height: u32,
row_pad_size: u32,
bytes_per_pixel: u32,
) -> io::Result<()> {
let width = width as usize;
let height = height as usize;
let x_stride = bytes_per_pixel as usize;
let y_stride = width * x_stride;
for row in (0..height).rev() {
// from the bottom up
let row_start = row * y_stride;
for px in image[row_start..][..y_stride].chunks_exact(x_stride) {
let r = px[0];
let g = px[1];
let b = px[2];
let a = px[3];
// written as BGRA
self.writer.write_all(&[b, g, r, a])?;
}
self.write_row_pad(row_pad_size)?;
}
Ok(())
}
fn encode_gray(
&mut self,
image: &[u8],
width: u32,
height: u32,
row_pad_size: u32,
bytes_per_pixel: u32,
palette: Option<&[[u8; 3]]>,
) -> io::Result<()> {
// write grayscale palette
if let Some(palette) = palette {
for item in palette {
// each color is written as BGRA, where A is always 0
self.writer.write_all(&[item[2], item[1], item[0], 0])?;
}
} else {
for val in 0u8..=255 {
// each color is written as BGRA, where A is always 0 and since only grayscale is being written, B = G = R = index
self.writer.write_all(&[val, val, val, 0])?;
}
}
// write image data
let x_stride = bytes_per_pixel;
let y_stride = width * x_stride;
for row in (0..height).rev() {
// from the bottom up
let row_start = row * y_stride;
for col in 0..width {
let pixel_start = (row_start + (col * x_stride)) as usize;
// color value is equal to the palette index
self.writer.write_u8(image[pixel_start])?;
// alpha is never written as it's not widely supported
}
self.write_row_pad(row_pad_size)?;
}
Ok(())
}
fn write_row_pad(&mut self, row_pad_size: u32) -> io::Result<()> {
for _ in 0..row_pad_size {
self.writer.write_u8(0)?;
}
Ok(())
}
}
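The encoder above pads each pixel row to a multiple of 4 bytes, computing the pad as `(4 - (width * bpp) % 4) % 4`. A minimal sketch of that rule in isolation (`row_pad` is a hypothetical helper, not part of the encoder):

```rust
// BMP rows are padded to 4-byte boundaries; a row of width * bpp bytes
// needs this many trailing zero bytes.
fn row_pad(width: u32, bytes_per_pixel: u32) -> u32 {
    (4 - (width * bytes_per_pixel) % 4) % 4
}

fn main() {
    assert_eq!(row_pad(1, 3), 1); // 3 bytes  -> pad to 4
    assert_eq!(row_pad(2, 3), 2); // 6 bytes  -> pad to 8
    assert_eq!(row_pad(4, 3), 0); // 12 bytes -> already aligned
    assert_eq!(row_pad(1, 4), 0); // RGBA rows are always aligned
}
```

The outer `% 4` handles the already-aligned case, where `4 - 0` would otherwise yield a spurious 4-byte pad.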
impl<'a, W: Write> ImageEncoder for BmpEncoder<'a, W> {
fn write_image(
mut self,
buf: &[u8],
width: u32,
height: u32,
color_type: color::ColorType,
) -> ImageResult<()> {
self.encode(buf, width, height, color_type)
}
}
fn get_unsupported_error_message(c: color::ColorType) -> String {
format!(
"Unsupported color type {:?}. Supported types: RGB(8), RGBA(8), Gray(8), GrayA(8).",
c
)
}
/// Returns a tuple representing: (dib header size, written pixel size, palette color count).
fn get_pixel_info(c: color::ColorType, palette: Option<&[[u8; 3]]>) -> io::Result<(u32, u32, u32)> {
let sizes = match c {
color::ColorType::Rgb8 => (BITMAPINFOHEADER_SIZE, 3, 0),
color::ColorType::Rgba8 => (BITMAPV4HEADER_SIZE, 4, 0),
color::ColorType::L8 => (
BITMAPINFOHEADER_SIZE,
1,
palette.map(|p| p.len()).unwrap_or(256) as u32,
),
color::ColorType::La8 => (
BITMAPINFOHEADER_SIZE,
1,
palette.map(|p| p.len()).unwrap_or(256) as u32,
),
_ => {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
&get_unsupported_error_message(c)[..],
))
}
};
Ok(sizes)
}
#[cfg(test)]
mod tests {
use super::super::BmpDecoder;
use super::BmpEncoder;
use crate::color::ColorType;
use crate::image::ImageDecoder;
use std::io::Cursor;
fn round_trip_image(image: &[u8], width: u32, height: u32, c: ColorType) -> Vec<u8> {
let mut encoded_data = Vec::new();
{
let mut encoder = BmpEncoder::new(&mut encoded_data);
encoder
.encode(&image, width, height, c)
.expect("could not encode image");
}
let decoder = BmpDecoder::new(Cursor::new(&encoded_data)).expect("failed to decode");
let mut buf = vec![0; decoder.total_bytes() as usize];
decoder.read_image(&mut buf).expect("failed to decode");
buf
}
#[test]
fn round_trip_single_pixel_rgb() {
let image = [255u8, 0, 0]; // single red pixel
let decoded = round_trip_image(&image, 1, 1, ColorType::Rgb8);
assert_eq!(3, decoded.len());
assert_eq!(255, decoded[0]);
assert_eq!(0, decoded[1]);
assert_eq!(0, decoded[2]);
}
#[test]
#[cfg(target_pointer_width = "64")]
fn huge_files_return_error() {
let mut encoded_data = Vec::new();
let image = vec![0u8; 3 * 40_000 * 40_000]; // 40_000x40_000 pixels, 3 bytes per pixel, allocated on the heap
let mut encoder = BmpEncoder::new(&mut encoded_data);
let result = encoder.encode(&image, 40_000, 40_000, ColorType::Rgb8);
assert!(result.is_err());
}
#[test]
fn round_trip_single_pixel_rgba() {
let image = [1, 2, 3, 4];
let decoded = round_trip_image(&image, 1, 1, ColorType::Rgba8);
assert_eq!(&decoded[..], &image[..]);
}
#[test]
fn round_trip_3px_rgb() {
let image = [0u8; 3 * 3 * 3]; // 3x3 pixels, 3 bytes per pixel
let _decoded = round_trip_image(&image, 3, 3, ColorType::Rgb8);
}
#[test]
fn round_trip_gray() {
let image = [0u8, 1, 2]; // 3 pixels
let decoded = round_trip_image(&image, 3, 1, ColorType::L8);
// should be read back as 3 RGB pixels
assert_eq!(9, decoded.len());
assert_eq!(0, decoded[0]);
assert_eq!(0, decoded[1]);
assert_eq!(0, decoded[2]);
assert_eq!(1, decoded[3]);
assert_eq!(1, decoded[4]);
assert_eq!(1, decoded[5]);
assert_eq!(2, decoded[6]);
assert_eq!(2, decoded[7]);
assert_eq!(2, decoded[8]);
}
#[test]
fn round_trip_graya() {
let image = [0u8, 0, 1, 0, 2, 0]; // 3 pixels, each with an alpha channel
let decoded = round_trip_image(&image, 1, 3, ColorType::La8);
// should be read back as 3 RGB pixels
assert_eq!(9, decoded.len());
assert_eq!(0, decoded[0]);
assert_eq!(0, decoded[1]);
assert_eq!(0, decoded[2]);
assert_eq!(1, decoded[3]);
assert_eq!(1, decoded[4]);
assert_eq!(1, decoded[5]);
assert_eq!(2, decoded[6]);
assert_eq!(2, decoded[7]);
assert_eq!(2, decoded[8]);
}
}

14
vendor/image/src/codecs/bmp/mod.rs vendored Normal file

@@ -0,0 +1,14 @@
//! Decoding and Encoding of BMP Images
//!
//! A decoder and encoder for BMP (Windows Bitmap) images
//!
//! # Related Links
//! * <https://msdn.microsoft.com/en-us/library/windows/desktop/dd183375%28v=vs.85%29.aspx>
//! * <https://en.wikipedia.org/wiki/BMP_file_format>
//!
pub use self::decoder::BmpDecoder;
pub use self::encoder::BmpEncoder;
mod decoder;
mod encoder;

375
vendor/image/src/codecs/dds.rs vendored Normal file

@@ -0,0 +1,375 @@
//! Decoding of DDS images
//!
//! DDS (DirectDraw Surface) is a container format for storing DXT (S3TC) compressed images.
//!
//! # Related Links
//! * <https://docs.microsoft.com/en-us/windows/win32/direct3ddds/dx-graphics-dds-pguide> - Description of the DDS format.
use std::io::Read;
use std::{error, fmt};
use byteorder::{LittleEndian, ReadBytesExt};
#[allow(deprecated)]
use crate::codecs::dxt::{DxtDecoder, DxtReader, DxtVariant};
use crate::color::ColorType;
use crate::error::{
DecodingError, ImageError, ImageFormatHint, ImageResult, UnsupportedError, UnsupportedErrorKind,
};
use crate::image::{ImageDecoder, ImageFormat};
/// Errors that can occur during decoding and parsing a DDS image
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum DecoderError {
/// Wrong DDS channel width
PixelFormatSizeInvalid(u32),
/// Wrong DDS header size
HeaderSizeInvalid(u32),
/// Wrong DDS header flags
HeaderFlagsInvalid(u32),
/// Invalid DXGI format in DX10 header
DxgiFormatInvalid(u32),
/// Invalid resource dimension
ResourceDimensionInvalid(u32),
/// Invalid flags in DX10 header
Dx10FlagsInvalid(u32),
/// Invalid array size in DX10 header
Dx10ArraySizeInvalid(u32),
/// DDS "DDS " signature invalid or missing
DdsSignatureInvalid,
}
impl fmt::Display for DecoderError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
DecoderError::PixelFormatSizeInvalid(s) => {
f.write_fmt(format_args!("Invalid DDS PixelFormat size: {}", s))
}
DecoderError::HeaderSizeInvalid(s) => {
f.write_fmt(format_args!("Invalid DDS header size: {}", s))
}
DecoderError::HeaderFlagsInvalid(fs) => {
f.write_fmt(format_args!("Invalid DDS header flags: {:#010X}", fs))
}
DecoderError::DxgiFormatInvalid(df) => {
f.write_fmt(format_args!("Invalid DDS DXGI format: {}", df))
}
DecoderError::ResourceDimensionInvalid(d) => {
f.write_fmt(format_args!("Invalid DDS resource dimension: {}", d))
}
DecoderError::Dx10FlagsInvalid(fs) => {
f.write_fmt(format_args!("Invalid DDS DX10 header flags: {:#010X}", fs))
}
DecoderError::Dx10ArraySizeInvalid(s) => {
f.write_fmt(format_args!("Invalid DDS DX10 array size: {}", s))
}
DecoderError::DdsSignatureInvalid => f.write_str("DDS signature not found"),
}
}
}
impl From<DecoderError> for ImageError {
fn from(e: DecoderError) -> ImageError {
ImageError::Decoding(DecodingError::new(ImageFormat::Dds.into(), e))
}
}
impl error::Error for DecoderError {}
/// Header used by DDS image files
#[derive(Debug)]
struct Header {
_flags: u32,
height: u32,
width: u32,
_pitch_or_linear_size: u32,
_depth: u32,
_mipmap_count: u32,
pixel_format: PixelFormat,
_caps: u32,
_caps2: u32,
}
/// Extended DX10 header used by some DDS image files
#[derive(Debug)]
struct DX10Header {
dxgi_format: u32,
resource_dimension: u32,
misc_flag: u32,
array_size: u32,
misc_flags_2: u32,
}
/// DDS pixel format
#[derive(Debug)]
struct PixelFormat {
flags: u32,
fourcc: [u8; 4],
_rgb_bit_count: u32,
_r_bit_mask: u32,
_g_bit_mask: u32,
_b_bit_mask: u32,
_a_bit_mask: u32,
}
impl PixelFormat {
fn from_reader(r: &mut dyn Read) -> ImageResult<Self> {
let size = r.read_u32::<LittleEndian>()?;
if size != 32 {
return Err(DecoderError::PixelFormatSizeInvalid(size).into());
}
Ok(Self {
flags: r.read_u32::<LittleEndian>()?,
fourcc: {
let mut v = [0; 4];
r.read_exact(&mut v)?;
v
},
_rgb_bit_count: r.read_u32::<LittleEndian>()?,
_r_bit_mask: r.read_u32::<LittleEndian>()?,
_g_bit_mask: r.read_u32::<LittleEndian>()?,
_b_bit_mask: r.read_u32::<LittleEndian>()?,
_a_bit_mask: r.read_u32::<LittleEndian>()?,
})
}
}
impl Header {
fn from_reader(r: &mut dyn Read) -> ImageResult<Self> {
let size = r.read_u32::<LittleEndian>()?;
if size != 124 {
return Err(DecoderError::HeaderSizeInvalid(size).into());
}
const REQUIRED_FLAGS: u32 = 0x1 | 0x2 | 0x4 | 0x1000;
const VALID_FLAGS: u32 = 0x1 | 0x2 | 0x4 | 0x8 | 0x1000 | 0x20000 | 0x80000 | 0x800000;
let flags = r.read_u32::<LittleEndian>()?;
if flags & (REQUIRED_FLAGS | !VALID_FLAGS) != REQUIRED_FLAGS {
return Err(DecoderError::HeaderFlagsInvalid(flags).into());
}
let height = r.read_u32::<LittleEndian>()?;
let width = r.read_u32::<LittleEndian>()?;
let pitch_or_linear_size = r.read_u32::<LittleEndian>()?;
let depth = r.read_u32::<LittleEndian>()?;
let mipmap_count = r.read_u32::<LittleEndian>()?;
// Skip `dwReserved1`
{
let mut skipped = [0; 4 * 11];
r.read_exact(&mut skipped)?;
}
let pixel_format = PixelFormat::from_reader(r)?;
let caps = r.read_u32::<LittleEndian>()?;
let caps2 = r.read_u32::<LittleEndian>()?;
// Skip `dwCaps3`, `dwCaps4`, `dwReserved2` (unused)
{
let mut skipped = [0; 4 + 4 + 4];
r.read_exact(&mut skipped)?;
}
Ok(Self {
_flags: flags,
height,
width,
_pitch_or_linear_size: pitch_or_linear_size,
_depth: depth,
_mipmap_count: mipmap_count,
pixel_format,
_caps: caps,
_caps2: caps2,
})
}
}
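`Header::from_reader` above validates the DDS flags with a single mask test: all `REQUIRED_FLAGS` must be set and no bit outside `VALID_FLAGS` may be set, which collapses into `flags & (REQUIRED | !VALID) == REQUIRED`. A standalone sketch of that check (`flags_ok` is a hypothetical helper, using the same constants as the code above):

```rust
const REQUIRED_FLAGS: u32 = 0x1 | 0x2 | 0x4 | 0x1000;
const VALID_FLAGS: u32 = 0x1 | 0x2 | 0x4 | 0x8 | 0x1000 | 0x20000 | 0x80000 | 0x800000;

// One mask test covers both conditions: masking with !VALID_FLAGS keeps
// any out-of-range bit, masking with REQUIRED_FLAGS keeps the required
// bits; the result equals REQUIRED_FLAGS only if all required bits are
// set and no invalid bit is.
fn flags_ok(flags: u32) -> bool {
    flags & (REQUIRED_FLAGS | !VALID_FLAGS) == REQUIRED_FLAGS
}

fn main() {
    assert!(flags_ok(REQUIRED_FLAGS));          // exactly the required set
    assert!(flags_ok(REQUIRED_FLAGS | 0x8));    // extra valid flag is fine
    assert!(!flags_ok(0x1 | 0x2));              // missing required flags
    assert!(!flags_ok(REQUIRED_FLAGS | 0x10));  // 0x10 is outside VALID_FLAGS
}
```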
impl DX10Header {
fn from_reader(r: &mut dyn Read) -> ImageResult<Self> {
let dxgi_format = r.read_u32::<LittleEndian>()?;
let resource_dimension = r.read_u32::<LittleEndian>()?;
let misc_flag = r.read_u32::<LittleEndian>()?;
let array_size = r.read_u32::<LittleEndian>()?;
let misc_flags_2 = r.read_u32::<LittleEndian>()?;
let dx10_header = Self {
dxgi_format,
resource_dimension,
misc_flag,
array_size,
misc_flags_2,
};
dx10_header.validate()?;
Ok(dx10_header)
}
fn validate(&self) -> Result<(), ImageError> {
// Note: see https://docs.microsoft.com/en-us/windows/win32/direct3ddds/dds-header-dxt10 for info on valid values
if self.dxgi_format > 132 {
// Invalid format
return Err(DecoderError::DxgiFormatInvalid(self.dxgi_format).into());
}
if self.resource_dimension < 2 || self.resource_dimension > 4 {
// Invalid dimension
// Only 1D (2), 2D (3) and 3D (4) resource dimensions are allowed
return Err(DecoderError::ResourceDimensionInvalid(self.resource_dimension).into());
}
if self.misc_flag != 0x0 && self.misc_flag != 0x4 {
// Invalid flag
// Only no (0x0) and DDS_RESOURCE_MISC_TEXTURECUBE (0x4) flags are allowed
return Err(DecoderError::Dx10FlagsInvalid(self.misc_flag).into());
}
if self.resource_dimension == 4 && self.array_size != 1 {
// Invalid array size
// 3D textures (resource dimension == 4) must have an array size of 1
return Err(DecoderError::Dx10ArraySizeInvalid(self.array_size).into());
}
if self.misc_flags_2 > 0x4 {
// Invalid alpha flags
return Err(DecoderError::Dx10FlagsInvalid(self.misc_flags_2).into());
}
Ok(())
}
}
/// The representation of a DDS decoder
pub struct DdsDecoder<R: Read> {
#[allow(deprecated)]
inner: DxtDecoder<R>,
}
impl<R: Read> DdsDecoder<R> {
/// Create a new decoder that decodes from the stream `r`
pub fn new(mut r: R) -> ImageResult<Self> {
let mut magic = [0; 4];
r.read_exact(&mut magic)?;
if magic != b"DDS "[..] {
return Err(DecoderError::DdsSignatureInvalid.into());
}
let header = Header::from_reader(&mut r)?;
if header.pixel_format.flags & 0x4 != 0 {
#[allow(deprecated)]
let variant = match &header.pixel_format.fourcc {
b"DXT1" => DxtVariant::DXT1,
b"DXT3" => DxtVariant::DXT3,
b"DXT5" => DxtVariant::DXT5,
b"DX10" => {
let dx10_header = DX10Header::from_reader(&mut r)?;
// Format equivalents were taken from https://docs.microsoft.com/en-us/windows/win32/direct3d11/texture-block-compression-in-direct3d-11
// The enum integer values were taken from https://docs.microsoft.com/en-us/windows/win32/api/dxgiformat/ne-dxgiformat-dxgi_format
// DXT1 represents the different BC1 variants, DXT3 represents the different BC2 variants and DXT5 represents the different BC3 variants
match dx10_header.dxgi_format {
70 | 71 | 72 => DxtVariant::DXT1, // DXGI_FORMAT_BC1_TYPELESS, DXGI_FORMAT_BC1_UNORM or DXGI_FORMAT_BC1_UNORM_SRGB
73 | 74 | 75 => DxtVariant::DXT3, // DXGI_FORMAT_BC2_TYPELESS, DXGI_FORMAT_BC2_UNORM or DXGI_FORMAT_BC2_UNORM_SRGB
76 | 77 | 78 => DxtVariant::DXT5, // DXGI_FORMAT_BC3_TYPELESS, DXGI_FORMAT_BC3_UNORM or DXGI_FORMAT_BC3_UNORM_SRGB
_ => {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Dds.into(),
UnsupportedErrorKind::GenericFeature(format!(
"DDS DXGI Format {}",
dx10_header.dxgi_format
)),
),
))
}
}
}
fourcc => {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Dds.into(),
UnsupportedErrorKind::GenericFeature(format!(
"DDS FourCC {:?}",
fourcc
)),
),
))
}
};
#[allow(deprecated)]
let bytes_per_pixel = variant.color_type().bytes_per_pixel();
if crate::utils::check_dimension_overflow(header.width, header.height, bytes_per_pixel)
{
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Dds.into(),
UnsupportedErrorKind::GenericFeature(format!(
"Image dimensions ({}x{}) are too large",
header.width, header.height
)),
),
));
}
#[allow(deprecated)]
let inner = DxtDecoder::new(r, header.width, header.height, variant)?;
Ok(Self { inner })
} else {
// For now, supports only DXT variants
Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Dds.into(),
UnsupportedErrorKind::Format(ImageFormatHint::Name("DDS".to_string())),
),
))
}
}
}
impl<'a, R: 'a + Read> ImageDecoder<'a> for DdsDecoder<R> {
#[allow(deprecated)]
type Reader = DxtReader<R>;
fn dimensions(&self) -> (u32, u32) {
self.inner.dimensions()
}
fn color_type(&self) -> ColorType {
self.inner.color_type()
}
fn scanline_bytes(&self) -> u64 {
self.inner.scanline_bytes()
}
fn into_reader(self) -> ImageResult<Self::Reader> {
self.inner.into_reader()
}
fn read_image(self, buf: &mut [u8]) -> ImageResult<()> {
self.inner.read_image(buf)
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn dimension_overflow() {
// A DXT1 header with width and height set to 0xFFFF_FFFC (the largest u32 divisible by 4)
let header = vec![
0x44, 0x44, 0x53, 0x20, 0x7C, 0x0, 0x0, 0x0, 0x7, 0x10, 0x8, 0x0, 0xFC, 0xFF, 0xFF,
0xFF, 0xFC, 0xFF, 0xFF, 0xFF, 0x0, 0xC0, 0x12, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0,
0x0, 0x49, 0x4D, 0x41, 0x47, 0x45, 0x4D, 0x41, 0x47, 0x49, 0x43, 0x4B, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x20, 0x0, 0x0, 0x0,
0x4, 0x0, 0x0, 0x0, 0x44, 0x58, 0x54, 0x31, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x10, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
];
assert!(DdsDecoder::new(&header[..]).is_err());
}
}

869
vendor/image/src/codecs/dxt.rs vendored Normal file

@@ -0,0 +1,869 @@
//! Decoding of DXT (S3TC) compression
//!
//! DXT is an image format that supports lossy compression
//!
//! # Related Links
//! * <https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_texture_compression_s3tc.txt> - Description of the DXT compression OpenGL extensions.
//!
//! Note: this module only implements bare DXT encoding/decoding; it does not parse container formats that can hold DXT data, such as .dds files
use std::convert::TryFrom;
use std::io::{self, Read, Seek, SeekFrom, Write};
use crate::color::ColorType;
use crate::error::{ImageError, ImageResult, ParameterError, ParameterErrorKind};
use crate::image::{self, ImageDecoder, ImageDecoderRect, ImageReadBuffer, Progress};
/// What version of DXT compression are we using?
/// Note that DXT2 and DXT4 are left out as they're
/// just DXT3 and DXT5 with premultiplied alpha
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum DxtVariant {
/// The DXT1 format. 48 bytes of RGB data in a 4x4 pixel square are
/// compressed into an 8-byte block of DXT1 data
DXT1,
/// The DXT3 format. 64 bytes of RGBA data in a 4x4 pixel square are
/// compressed into a 16-byte block of DXT3 data
DXT3,
/// The DXT5 format. 64 bytes of RGBA data in a 4x4 pixel square are
/// compressed into a 16-byte block of DXT5 data
DXT5,
}
impl DxtVariant {
/// Returns the number of bytes of raw image data
/// that are encoded in a single DXTn block
fn decoded_bytes_per_block(self) -> usize {
match self {
DxtVariant::DXT1 => 48,
DxtVariant::DXT3 | DxtVariant::DXT5 => 64,
}
}
/// Returns the number of bytes per block of encoded DXTn data
fn encoded_bytes_per_block(self) -> usize {
match self {
DxtVariant::DXT1 => 8,
DxtVariant::DXT3 | DxtVariant::DXT5 => 16,
}
}
/// Returns the color type that is stored in this DXT variant
pub fn color_type(self) -> ColorType {
match self {
DxtVariant::DXT1 => ColorType::Rgb8,
DxtVariant::DXT3 | DxtVariant::DXT5 => ColorType::Rgba8,
}
}
}
/// DXT decoder
pub struct DxtDecoder<R: Read> {
inner: R,
width_blocks: u32,
height_blocks: u32,
variant: DxtVariant,
row: u32,
}
impl<R: Read> DxtDecoder<R> {
/// Create a new DXT decoder that decodes from the stream ```r```.
/// As DXT is often stored as raw buffers with the width/height
/// stored elsewhere, the width and height of the image need
/// to be passed in ```width``` and ```height```, as well as the
/// DXT variant in ```variant```.
/// Width and height are required to be multiples of 4;
/// otherwise an error will be returned.
pub fn new(
r: R,
width: u32,
height: u32,
variant: DxtVariant,
) -> Result<DxtDecoder<R>, ImageError> {
if width % 4 != 0 || height % 4 != 0 {
// TODO: this is actually a bit of a weird case. We could return `DecodingError`, but
// it's not really the format that is wrong. However, the encoder should surely return
// `EncodingError`, so that would be the logical choice for symmetry.
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
let width_blocks = width / 4;
let height_blocks = height / 4;
Ok(DxtDecoder {
inner: r,
width_blocks,
height_blocks,
variant,
row: 0,
})
}
fn read_scanline(&mut self, buf: &mut [u8]) -> io::Result<usize> {
assert_eq!(u64::try_from(buf.len()), Ok(self.scanline_bytes()));
let mut src =
vec![0u8; self.variant.encoded_bytes_per_block() * self.width_blocks as usize];
self.inner.read_exact(&mut src)?;
match self.variant {
DxtVariant::DXT1 => decode_dxt1_row(&src, buf),
DxtVariant::DXT3 => decode_dxt3_row(&src, buf),
DxtVariant::DXT5 => decode_dxt5_row(&src, buf),
}
self.row += 1;
Ok(buf.len())
}
}
// Note that, due to the way that DXT compression works, a scanline is considered to consist of
// 4 lines of pixels.
impl<'a, R: 'a + Read> ImageDecoder<'a> for DxtDecoder<R> {
type Reader = DxtReader<R>;
fn dimensions(&self) -> (u32, u32) {
(self.width_blocks * 4, self.height_blocks * 4)
}
fn color_type(&self) -> ColorType {
self.variant.color_type()
}
fn scanline_bytes(&self) -> u64 {
self.variant.decoded_bytes_per_block() as u64 * u64::from(self.width_blocks)
}
fn into_reader(self) -> ImageResult<Self::Reader> {
Ok(DxtReader {
buffer: ImageReadBuffer::new(self.scanline_bytes(), self.total_bytes()),
decoder: self,
})
}
fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> {
assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes()));
for chunk in buf.chunks_mut(self.scanline_bytes().max(1) as usize) {
self.read_scanline(chunk)?;
}
Ok(())
}
}
impl<'a, R: 'a + Read + Seek> ImageDecoderRect<'a> for DxtDecoder<R> {
fn read_rect_with_progress<F: Fn(Progress)>(
&mut self,
x: u32,
y: u32,
width: u32,
height: u32,
buf: &mut [u8],
progress_callback: F,
) -> ImageResult<()> {
let encoded_scanline_bytes =
self.variant.encoded_bytes_per_block() as u64 * u64::from(self.width_blocks);
let start = self.inner.stream_position()?;
image::load_rect(
x,
y,
width,
height,
buf,
progress_callback,
self,
|s, scanline| {
s.inner
.seek(SeekFrom::Start(start + scanline * encoded_scanline_bytes))?;
Ok(())
},
|s, buf| s.read_scanline(buf).map(|_| ()),
)?;
self.inner.seek(SeekFrom::Start(start))?;
Ok(())
}
}
/// DXT reader
pub struct DxtReader<R: Read> {
buffer: ImageReadBuffer,
decoder: DxtDecoder<R>,
}
impl<R: Read> Read for DxtReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
let decoder = &mut self.decoder;
self.buffer.read(buf, |buf| decoder.read_scanline(buf))
}
}
/// DXT encoder
pub struct DxtEncoder<W: Write> {
w: W,
}
impl<W: Write> DxtEncoder<W> {
/// Create a new encoder that writes its output to ```w```
pub fn new(w: W) -> DxtEncoder<W> {
DxtEncoder { w }
}
/// Encodes the image data ```data```
/// that has dimensions ```width``` and ```height```
/// in ```DxtVariant``` ```variant```
/// ```data``` is assumed to be in the format given by ```variant.color_type()```
pub fn encode(
mut self,
data: &[u8],
width: u32,
height: u32,
variant: DxtVariant,
) -> ImageResult<()> {
if width % 4 != 0 || height % 4 != 0 {
// TODO: this is not very idiomatic yet. Should return an EncodingError.
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
let width_blocks = width / 4;
let height_blocks = height / 4;
let stride = variant.decoded_bytes_per_block();
assert!(data.len() >= width_blocks as usize * height_blocks as usize * stride);
for chunk in data.chunks(width_blocks as usize * stride) {
let data = match variant {
DxtVariant::DXT1 => encode_dxt1_row(chunk),
DxtVariant::DXT3 => encode_dxt3_row(chunk),
DxtVariant::DXT5 => encode_dxt5_row(chunk),
};
self.w.write_all(&data)?;
}
Ok(())
}
}
/**
* Actual encoding/decoding logic below.
*/
use std::mem::swap;
type Rgb = [u8; 3];
/// decodes a 5-bit R, 6-bit G, 5-bit B 16-bit packed color value into 8-bit RGB
/// mapping is done so min/max range values are preserved. So for 5-bit
/// values 0x00 -> 0x00 and 0x1F -> 0xFF
fn enc565_decode(value: u16) -> Rgb {
let red = (value >> 11) & 0x1F;
let green = (value >> 5) & 0x3F;
let blue = (value) & 0x1F;
[
(red * 0xFF / 0x1F) as u8,
(green * 0xFF / 0x3F) as u8,
(blue * 0xFF / 0x1F) as u8,
]
}
/// encodes an 8-bit RGB value into a 5-bit R, 6-bit G, 5-bit B 16-bit packed color value
/// mapping preserves min/max values. It is guaranteed that i == encode(decode(i)) for all i
fn enc565_encode(rgb: Rgb) -> u16 {
let red = (u16::from(rgb[0]) * 0x1F + 0x7E) / 0xFF;
let green = (u16::from(rgb[1]) * 0x3F + 0x7E) / 0xFF;
let blue = (u16::from(rgb[2]) * 0x1F + 0x7E) / 0xFF;
(red << 11) | (green << 5) | blue
}
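Since the two 5-6-5 helpers above are private to the module, here is a standalone copy that can be run to check the documented round-trip guarantee:

```rust
// Standalone copy of the module's 5-6-5 helpers, for illustration.
fn enc565_decode(value: u16) -> [u8; 3] {
    let red = (value >> 11) & 0x1F;
    let green = (value >> 5) & 0x3F;
    let blue = value & 0x1F;
    [
        (red * 0xFF / 0x1F) as u8,
        (green * 0xFF / 0x3F) as u8,
        (blue * 0xFF / 0x1F) as u8,
    ]
}

fn enc565_encode(rgb: [u8; 3]) -> u16 {
    let red = (u16::from(rgb[0]) * 0x1F + 0x7E) / 0xFF;
    let green = (u16::from(rgb[1]) * 0x3F + 0x7E) / 0xFF;
    let blue = (u16::from(rgb[2]) * 0x1F + 0x7E) / 0xFF;
    (red << 11) | (green << 5) | blue
}

fn main() {
    // min/max endpoints are preserved exactly
    assert_eq!(enc565_decode(0x0000), [0x00, 0x00, 0x00]);
    assert_eq!(enc565_decode(0xFFFF), [0xFF, 0xFF, 0xFF]);
    // the documented guarantee: i == encode(decode(i)) for all i
    for i in 0..=u16::MAX {
        assert_eq!(enc565_encode(enc565_decode(i)), i);
    }
    println!("ok");
}
```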
/// utility function: squares a value
fn square(a: i32) -> i32 {
a * a
}
/// returns the squared error between two RGB values
fn diff(a: Rgb, b: Rgb) -> i32 {
square(i32::from(a[0]) - i32::from(b[0]))
+ square(i32::from(a[1]) - i32::from(b[1]))
+ square(i32::from(a[2]) - i32::from(b[2]))
}
/*
* Functions for decoding DXT compression
*/
/// Constructs the DXT5 alpha lookup table from the two alpha entries
/// if alpha0 > alpha1, constructs a table of [a0, a1, 6 linearly interpolated values from a0 to a1]
/// if alpha0 <= alpha1, constructs a table of [a0, a1, 4 linearly interpolated values from a0 to a1, 0, 0xFF]
fn alpha_table_dxt5(alpha0: u8, alpha1: u8) -> [u8; 8] {
let mut table = [alpha0, alpha1, 0, 0, 0, 0, 0, 0xFF];
if alpha0 > alpha1 {
for i in 2..8u16 {
table[i as usize] =
(((8 - i) * u16::from(alpha0) + (i - 1) * u16::from(alpha1)) / 7) as u8;
}
} else {
for i in 2..6u16 {
table[i as usize] =
(((6 - i) * u16::from(alpha0) + (i - 1) * u16::from(alpha1)) / 5) as u8;
}
}
table
}
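A standalone copy of the alpha table builder above, showing the two interpolation modes on extreme endpoints (the expected tables below were computed from the integer arithmetic in the function itself):

```rust
// Standalone copy of alpha_table_dxt5, for illustration.
fn alpha_table_dxt5(alpha0: u8, alpha1: u8) -> [u8; 8] {
    let mut table = [alpha0, alpha1, 0, 0, 0, 0, 0, 0xFF];
    if alpha0 > alpha1 {
        // eight-step mode: endpoints plus six interpolated values
        for i in 2..8u16 {
            table[i as usize] =
                (((8 - i) * u16::from(alpha0) + (i - 1) * u16::from(alpha1)) / 7) as u8;
        }
    } else {
        // six-step mode: endpoints, four interpolated values, then 0 and 0xFF
        for i in 2..6u16 {
            table[i as usize] =
                (((6 - i) * u16::from(alpha0) + (i - 1) * u16::from(alpha1)) / 5) as u8;
        }
    }
    table
}

fn main() {
    assert_eq!(
        alpha_table_dxt5(255, 0),
        [255, 0, 218, 182, 145, 109, 72, 36]
    );
    assert_eq!(
        alpha_table_dxt5(0, 255),
        [0, 255, 51, 102, 153, 204, 0, 255]
    );
    println!("ok");
}
```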
/// decodes an 8-byte dxt color block into the RGB channels of a 16xRGB or 16xRGBA block.
/// source should have a length of 8, dest a length of 48 (RGB) or 64 (RGBA)
fn decode_dxt_colors(source: &[u8], dest: &mut [u8], is_dxt1: bool) {
// sanity checks, also enable the compiler to elide all following bound checks
assert!(source.len() == 8 && (dest.len() == 48 || dest.len() == 64));
// calculate pitch to store RGB values in dest (3 for RGB, 4 for RGBA)
let pitch = dest.len() / 16;
// extract color data
let color0 = u16::from(source[0]) | (u16::from(source[1]) << 8);
let color1 = u16::from(source[2]) | (u16::from(source[3]) << 8);
let color_table = u32::from(source[4])
| (u32::from(source[5]) << 8)
| (u32::from(source[6]) << 16)
| (u32::from(source[7]) << 24);
// let color_table = source[4..8].iter().rev().fold(0, |t, &b| (t << 8) | b as u32);
// decode the colors to rgb format
let mut colors = [[0; 3]; 4];
colors[0] = enc565_decode(color0);
colors[1] = enc565_decode(color1);
// determine color interpolation method
if color0 > color1 || !is_dxt1 {
// linearly interpolate the other two color table entries
for i in 0..3 {
colors[2][i] = ((u16::from(colors[0][i]) * 2 + u16::from(colors[1][i]) + 1) / 3) as u8;
colors[3][i] = ((u16::from(colors[0][i]) + u16::from(colors[1][i]) * 2 + 1) / 3) as u8;
}
} else {
// linearly interpolate one other entry, keep the other at 0
for i in 0..3 {
colors[2][i] = ((u16::from(colors[0][i]) + u16::from(colors[1][i]) + 1) / 2) as u8;
}
}
// serialize the result. Every color is determined by looking up
// two bits in color_table which identify which color to actually pick from the 4 possible colors
for i in 0..16 {
dest[i * pitch..i * pitch + 3]
.copy_from_slice(&colors[(color_table >> (i * 2)) as usize & 3]);
}
}
/// Decodes a 16-byte block of dxt5 data to a 16xRGBA block
fn decode_dxt5_block(source: &[u8], dest: &mut [u8]) {
assert!(source.len() == 16 && dest.len() == 64);
// extract alpha index table (stored as little endian 64-bit value)
let alpha_table = source[2..8]
.iter()
.rev()
.fold(0, |t, &b| (t << 8) | u64::from(b));
// decode the alpha levels
let alphas = alpha_table_dxt5(source[0], source[1]);
// serialize alpha
for i in 0..16 {
dest[i * 4 + 3] = alphas[(alpha_table >> (i * 3)) as usize & 7];
}
// handle colors
decode_dxt_colors(&source[8..16], dest, false);
}
/// Decodes a 16-byte block of dxt3 data to a 16xRGBA block
fn decode_dxt3_block(source: &[u8], dest: &mut [u8]) {
assert!(source.len() == 16 && dest.len() == 64);
// extract alpha index table (stored as little endian 64-bit value)
let alpha_table = source[0..8]
.iter()
.rev()
.fold(0, |t, &b| (t << 8) | u64::from(b));
// serialize alpha (stored as 4-bit values)
for i in 0..16 {
dest[i * 4 + 3] = ((alpha_table >> (i * 4)) as u8 & 0xF) * 0x11;
}
// handle colors
decode_dxt_colors(&source[8..16], dest, false);
}
/// Decodes an 8-byte block of dxt1 data to a 16xRGB block
fn decode_dxt1_block(source: &[u8], dest: &mut [u8]) {
assert!(source.len() == 8 && dest.len() == 48);
decode_dxt_colors(source, dest, true);
}
/// Decode a row of DXT1 data to four rows of RGB data.
/// source.len() should be a multiple of 8, otherwise this panics.
fn decode_dxt1_row(source: &[u8], dest: &mut [u8]) {
assert!(source.len() % 8 == 0);
let block_count = source.len() / 8;
assert!(dest.len() >= block_count * 48);
// contains the 16 decoded pixels per block
let mut decoded_block = [0u8; 48];
for (x, encoded_block) in source.chunks(8).enumerate() {
decode_dxt1_block(encoded_block, &mut decoded_block);
// copy the values from the decoded block to linewise RGB layout
for line in 0..4 {
let offset = (block_count * line + x) * 12;
dest[offset..offset + 12].copy_from_slice(&decoded_block[line * 12..(line + 1) * 12]);
}
}
}
/// Decode a row of DXT3 data to four rows of RGBA data.
/// source.len() should be a multiple of 16, otherwise this panics.
fn decode_dxt3_row(source: &[u8], dest: &mut [u8]) {
assert!(source.len() % 16 == 0);
let block_count = source.len() / 16;
assert!(dest.len() >= block_count * 64);
// contains the 16 decoded pixels per block
let mut decoded_block = [0u8; 64];
for (x, encoded_block) in source.chunks(16).enumerate() {
decode_dxt3_block(encoded_block, &mut decoded_block);
// copy the values from the decoded block to linewise RGBA layout
for line in 0..4 {
let offset = (block_count * line + x) * 16;
dest[offset..offset + 16].copy_from_slice(&decoded_block[line * 16..(line + 1) * 16]);
}
}
}
/// Decode a row of DXT5 data to four rows of RGBA data.
/// source.len() should be a multiple of 16, otherwise this panics.
fn decode_dxt5_row(source: &[u8], dest: &mut [u8]) {
assert!(source.len() % 16 == 0);
let block_count = source.len() / 16;
assert!(dest.len() >= block_count * 64);
// contains the 16 decoded pixels per block
let mut decoded_block = [0u8; 64];
for (x, encoded_block) in source.chunks(16).enumerate() {
decode_dxt5_block(encoded_block, &mut decoded_block);
// copy the values from the decoded block to linewise RGBA layout
for line in 0..4 {
let offset = (block_count * line + x) * 16;
dest[offset..offset + 16].copy_from_slice(&decoded_block[line * 16..(line + 1) * 16]);
}
}
}
/*
* Functions for encoding DXT compression
*/
/// Tries to perform the color encoding part of dxt compression
/// the approach taken is simple: it picks unique combinations
/// of the colors present in the block, and attempts to encode the
/// block with each, picking the encoding that yields the least
/// squared error out of all of them.
///
/// This could probably be faster but is already reasonably fast
/// and a good reference impl to optimize others against.
///
/// Another way to perform this analysis would be to perform a
/// singular value decomposition of the different colors, and
/// then pick 2 points on this line as the base colors. But
/// this is still rather unwieldy math and has issues
/// with the 3-linear-colors-and-0 case; it's also worse
/// at conserving the original colors.
///
/// source: should be RGBAx16 or RGBx16 bytes of data,
/// dest 8 bytes of resulting encoded color data
fn encode_dxt_colors(source: &[u8], dest: &mut [u8], is_dxt1: bool) {
// sanity checks and determine stride when parsing the source data
assert!((source.len() == 64 || source.len() == 48) && dest.len() == 8);
let stride = source.len() / 16;
// reference colors array
let mut colors = [[0u8; 3]; 4];
// Put the colors we're going to be processing in an array with pure RGB layout
// note: we reverse the pixel order here. The reason for this is found in the inner quantization loop.
let mut targets = [[0u8; 3]; 16];
for (s, d) in source.chunks(stride).rev().zip(&mut targets) {
*d = [s[0], s[1], s[2]];
}
// roundtrip all colors through the r5g6b5 encoding
for rgb in &mut targets {
*rgb = enc565_decode(enc565_encode(*rgb));
}
// and deduplicate the set of colors to choose from as the algorithm is O(N^2) in this
let mut colorspace_ = [[0u8; 3]; 16];
let mut colorspace_len = 0;
for color in &targets {
if !colorspace_[..colorspace_len].contains(color) {
colorspace_[colorspace_len] = *color;
colorspace_len += 1;
}
}
let mut colorspace = &colorspace_[..colorspace_len];
// in case of slight gradients it can happen that there's only one entry left in the color table.
// as the resulting banding can be quite bad if we just left the block at the closest
// encodable color, we have a special path here that tries to emulate the wanted color
// using the linearly interpolated color table entries
if colorspace.len() == 1 {
// the base color we got from colorspace reduction
let ref_rgb = colorspace[0];
// the unreduced color in this block that's the furthest away from the actual block
let mut rgb = targets
.iter()
.cloned()
.max_by_key(|rgb| diff(*rgb, ref_rgb))
.unwrap();
// amplify differences by 2.5, which should push them to the next quantized value
// if possible without overshoot
for i in 0..3 {
rgb[i] =
((i16::from(rgb[i]) - i16::from(ref_rgb[i])) * 5 / 2 + i16::from(ref_rgb[i])) as u8;
}
// roundtrip it through quantization
let encoded = enc565_encode(rgb);
let rgb = enc565_decode(encoded);
// in case this didn't land us a different color the best way to represent this field is
// as a single color block
if rgb == ref_rgb {
dest[0] = encoded as u8;
dest[1] = (encoded >> 8) as u8;
for d in dest.iter_mut().take(8).skip(2) {
*d = 0;
}
return;
}
// we did find a separate value: add it to the options so after one round of quantization
// we're done
colorspace_[1] = rgb;
colorspace = &colorspace_[..2];
}
// block quantization loop: we basically just try every possible combination, returning
// the combination with the least squared error
// stores the best candidate colors
let mut chosen_colors = [[0; 3]; 4];
// did this index table use the [0,0,0] variant
let mut chosen_use_0 = false;
// error calculated for the last entry
let mut chosen_error = 0xFFFF_FFFFu32;
// loop through unique permutations of the colorspace, where c1 != c2
'search: for (i, &c1) in colorspace.iter().enumerate() {
colors[0] = c1;
for &c2 in &colorspace[0..i] {
colors[1] = c2;
if is_dxt1 {
// what's inside here is run at most 120 times.
for use_0 in 0..2 {
// and 240 times here.
if use_0 != 0 {
// interpolate one color, set the other to 0
for i in 0..3 {
colors[2][i] =
((u16::from(colors[0][i]) + u16::from(colors[1][i]) + 1) / 2) as u8;
}
colors[3] = [0, 0, 0];
} else {
// interpolate to get 2 more colors
for i in 0..3 {
colors[2][i] =
((u16::from(colors[0][i]) * 2 + u16::from(colors[1][i]) + 1) / 3)
as u8;
colors[3][i] =
((u16::from(colors[0][i]) + u16::from(colors[1][i]) * 2 + 1) / 3)
as u8;
}
}
// calculate the total error if we were to quantize the block with these color combinations
// both these loops have statically known iteration counts and are well vectorizable
// note that the inside of this can be run about 15360 times worst case, i.e. 960 times per
// pixel.
let total_error = targets
.iter()
.map(|t| colors.iter().map(|c| diff(*c, *t) as u32).min().unwrap())
.sum();
// update the match if we found a better one
if total_error < chosen_error {
chosen_colors = colors;
chosen_use_0 = use_0 != 0;
chosen_error = total_error;
// if we've got a perfect or at most 1 LSB off match, we're done
if total_error < 4 {
break 'search;
}
}
}
} else {
// what's inside here is run at most 120 times.
// interpolate to get 2 more colors
for i in 0..3 {
colors[2][i] =
((u16::from(colors[0][i]) * 2 + u16::from(colors[1][i]) + 1) / 3) as u8;
colors[3][i] =
((u16::from(colors[0][i]) + u16::from(colors[1][i]) * 2 + 1) / 3) as u8;
}
// calculate the total error if we were to quantize the block with these color combinations
// both these loops have statically known iteration counts and are well vectorizable
// note that the inside of this can be run about 15360 times worst case, i.e. 960 times per
// pixel.
let total_error = targets
.iter()
.map(|t| colors.iter().map(|c| diff(*c, *t) as u32).min().unwrap())
.sum();
// update the match if we found a better one
if total_error < chosen_error {
chosen_colors = colors;
chosen_error = total_error;
// if we've got a perfect or at most 1 LSB off match, we're done
if total_error < 4 {
break 'search;
}
}
}
}
}
// calculate the final indices
// note that targets is already in reverse pixel order, to make the index computation easy.
let mut chosen_indices = 0u32;
for t in &targets {
let (idx, _) = chosen_colors
.iter()
.enumerate()
.min_by_key(|&(_, c)| diff(*c, *t))
.unwrap();
chosen_indices = (chosen_indices << 2) | idx as u32;
}
// encode the colors
let mut color0 = enc565_encode(chosen_colors[0]);
let mut color1 = enc565_encode(chosen_colors[1]);
// determine encoding. Note that color0 == color1 is impossible at this point
if is_dxt1 {
if color0 > color1 {
if chosen_use_0 {
swap(&mut color0, &mut color1);
// Indexes are packed 2 bits wide, swap index 0/1 but preserve 2/3.
let filter = (chosen_indices & 0xAAAA_AAAA) >> 1;
chosen_indices ^= filter ^ 0x5555_5555;
}
} else if !chosen_use_0 {
swap(&mut color0, &mut color1);
// Indexes are packed 2 bits wide, swap index 0/1 and 2/3.
chosen_indices ^= 0x5555_5555;
}
}
// encode everything.
dest[0] = color0 as u8;
dest[1] = (color0 >> 8) as u8;
dest[2] = color1 as u8;
dest[3] = (color1 >> 8) as u8;
for i in 0..4 {
dest[i + 4] = (chosen_indices >> (i * 8)) as u8;
}
}
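The index manipulation at the end of encode_dxt_colors relies on two XOR tricks on the 2-bit packed index table. A small standalone check (helper names are illustrative): XOR with 0x5555_5555 swaps index 0<->1 and 2<->3 in every slot, while the masked variant swaps only 0<->1 and leaves 2 and 3 untouched.

```rust
// Swap 0<->1 and 2<->3 in every 2-bit slot.
fn swap_all(indices: u32) -> u32 {
    indices ^ 0x5555_5555
}

// Swap only 0<->1; slots holding 2 or 3 are preserved.
fn swap_01_only(indices: u32) -> u32 {
    let filter = (indices & 0xAAAA_AAAA) >> 1;
    indices ^ (filter ^ 0x5555_5555)
}

// Extract the i-th 2-bit index.
fn nth(indices: u32, i: u32) -> u32 {
    (indices >> (i * 2)) & 3
}

fn main() {
    // a table holding indices 0, 1, 2, 3 in its lowest four slots
    let indices = 0b11_10_01_00u32;
    let swapped = swap_all(indices);
    assert_eq!(
        (0..4u32).map(|i| nth(swapped, i)).collect::<Vec<_>>(),
        vec![1, 0, 3, 2]
    );
    let swapped = swap_01_only(indices);
    assert_eq!(
        (0..4u32).map(|i| nth(swapped, i)).collect::<Vec<_>>(),
        vec![1, 0, 2, 3]
    );
    println!("ok");
}
```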
/// Encodes a buffer of 16 alpha bytes into a dxt5 alpha index table,
/// where the alpha table they are indexed against is created by
/// calling alpha_table_dxt5(alpha0, alpha1).
/// Returns the resulting squared error and the index table.
fn encode_dxt5_alpha(alpha0: u8, alpha1: u8, alphas: &[u8; 16]) -> (i32, u64) {
// create a table for the given alpha ranges
let table = alpha_table_dxt5(alpha0, alpha1);
let mut indices = 0u64;
let mut total_error = 0i32;
// least error brute force search
for (i, &a) in alphas.iter().enumerate() {
let (index, error) = table
.iter()
.enumerate()
.map(|(i, &e)| (i, square(i32::from(e) - i32::from(a))))
.min_by_key(|&(_, e)| e)
.unwrap();
total_error += error;
indices |= (index as u64) << (i * 3);
}
(total_error, indices)
}
/// Encodes a RGBAx16 sequence of bytes to a 16 bytes DXT5 block
fn encode_dxt5_block(source: &[u8], dest: &mut [u8]) {
assert!(source.len() == 64 && dest.len() == 16);
// perform dxt color encoding
encode_dxt_colors(source, &mut dest[8..16], false);
// copy out the alpha bytes
let mut alphas = [0; 16];
for i in 0..16 {
alphas[i] = source[i * 4 + 3];
}
// try both alpha compression methods, see which has the least error.
let alpha07 = alphas.iter().cloned().min().unwrap();
let alpha17 = alphas.iter().cloned().max().unwrap();
let (error7, indices7) = encode_dxt5_alpha(alpha07, alpha17, &alphas);
// if all alphas are 0 or 255 it doesn't particularly matter what we do here.
let alpha05 = alphas
.iter()
.cloned()
.filter(|&i| i != 255)
.max()
.unwrap_or(255);
let alpha15 = alphas
.iter()
.cloned()
.filter(|&i| i != 0)
.min()
.unwrap_or(0);
let (error5, indices5) = encode_dxt5_alpha(alpha05, alpha15, &alphas);
// pick the best one, encode the min/max values
let mut alpha_table = if error5 < error7 {
dest[0] = alpha05;
dest[1] = alpha15;
indices5
} else {
dest[0] = alpha07;
dest[1] = alpha17;
indices7
};
// encode the alphas
for byte in dest[2..8].iter_mut() {
*byte = alpha_table as u8;
alpha_table >>= 8;
}
}
/// Encodes a RGBAx16 sequence of bytes into a 16 bytes DXT3 block
fn encode_dxt3_block(source: &[u8], dest: &mut [u8]) {
assert!(source.len() == 64 && dest.len() == 16);
// perform dxt color encoding
encode_dxt_colors(source, &mut dest[8..16], false);
// DXT3 alpha compression is very simple: just round to the nearest value and
// index the alpha values into the 64-bit alpha table
let mut alpha_table = 0u64;
for i in 0..16 {
let alpha = u64::from(source[i * 4 + 3]);
let alpha = (alpha + 0x8) / 0x11;
alpha_table |= alpha << (i * 4);
}
// encode the alpha values
for byte in &mut dest[0..8] {
*byte = alpha_table as u8;
alpha_table >>= 8;
}
}
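The `(alpha + 0x8) / 0x11` expression above implements nearest-value rounding to the 16 representable 4-bit alpha levels; a decoder expands the stored 4-bit value v back to `v * 0x11` (as seen in decode_dxt3_block). A standalone round-trip check of this rule (the helper name is illustrative):

```rust
// DXT3 alpha: quantize an 8-bit alpha to 4 bits and expand it back.
fn dxt3_alpha_roundtrip(alpha: u8) -> u8 {
    let quantized = (u16::from(alpha) + 0x8) / 0x11; // encoder side
    (quantized * 0x11) as u8 // decoder side
}

fn main() {
    // multiples of 0x11 survive the round trip exactly
    assert_eq!(dxt3_alpha_roundtrip(0), 0);
    assert_eq!(dxt3_alpha_roundtrip(0x11), 0x11);
    assert_eq!(dxt3_alpha_roundtrip(255), 255);
    // everything else lands within half a quantization step
    for a in 0u8..=255 {
        let err = (i16::from(dxt3_alpha_roundtrip(a)) - i16::from(a)).abs();
        assert!(err <= 8);
    }
    println!("ok");
}
```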
/// Encodes a RGBx16 sequence of bytes into a 8 bytes DXT1 block
fn encode_dxt1_block(source: &[u8], dest: &mut [u8]) {
assert!(source.len() == 48 && dest.len() == 8);
// perform dxt color encoding
encode_dxt_colors(source, dest, true);
}
/// Encode four rows of RGB data into a row of DXT1 data.
/// source.len() should be a multiple of 48, otherwise this panics.
fn encode_dxt1_row(source: &[u8]) -> Vec<u8> {
assert!(source.len() % 48 == 0);
let block_count = source.len() / 48;
let mut dest = vec![0u8; block_count * 8];
// contains the 16 decoded pixels per block
let mut decoded_block = [0u8; 48];
for (x, encoded_block) in dest.chunks_mut(8).enumerate() {
// gather the values from the linewise RGB layout into the block to encode
for line in 0..4 {
let offset = (block_count * line + x) * 12;
decoded_block[line * 12..(line + 1) * 12].copy_from_slice(&source[offset..offset + 12]);
}
encode_dxt1_block(&decoded_block, encoded_block);
}
dest
}
/// Encode four rows of RGBA data into a row of DXT3 data.
/// source.len() should be a multiple of 64, otherwise this panics.
fn encode_dxt3_row(source: &[u8]) -> Vec<u8> {
assert!(source.len() % 64 == 0);
let block_count = source.len() / 64;
let mut dest = vec![0u8; block_count * 16];
// contains the 16 decoded pixels per block
let mut decoded_block = [0u8; 64];
for (x, encoded_block) in dest.chunks_mut(16).enumerate() {
// gather the values from the linewise RGBA layout into the block to encode
for line in 0..4 {
let offset = (block_count * line + x) * 16;
decoded_block[line * 16..(line + 1) * 16].copy_from_slice(&source[offset..offset + 16]);
}
encode_dxt3_block(&decoded_block, encoded_block);
}
dest
}
/// Encode four rows of RGBA data into a row of DXT5 data.
/// source.len() should be a multiple of 64, otherwise this panics.
fn encode_dxt5_row(source: &[u8]) -> Vec<u8> {
assert!(source.len() % 64 == 0);
let block_count = source.len() / 64;
let mut dest = vec![0u8; block_count * 16];
// contains the 16 decoded pixels per block
let mut decoded_block = [0u8; 64];
for (x, encoded_block) in dest.chunks_mut(16).enumerate() {
// gather the values from the linewise RGBA layout into the block to encode
for line in 0..4 {
let offset = (block_count * line + x) * 16;
decoded_block[line * 16..(line + 1) * 16].copy_from_slice(&source[offset..offset + 16]);
}
encode_dxt5_block(&decoded_block, encoded_block);
}
dest
}

400
vendor/image/src/codecs/farbfeld.rs vendored Normal file

@@ -0,0 +1,400 @@
//! Decoding of farbfeld images
//!
//! farbfeld is a lossless image format which is easy to parse, pipe and compress.
//!
//! It has the following format:
//!
//! | Bytes | Description |
//! |--------|---------------------------------------------------------|
//! | 8 | "farbfeld" magic value |
//! | 4 | 32-Bit BE unsigned integer (width) |
//! | 4 | 32-Bit BE unsigned integer (height) |
//! | [2222] | 4⋅16-Bit BE unsigned integers [RGBA] / pixel, row-major |
//!
//! The RGB-data should be sRGB for best interoperability and not alpha-premultiplied.
//!
//! # Related Links
//! * <https://tools.suckless.org/farbfeld/> - the farbfeld specification
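The header layout in the table above can be exercised with a small standalone sketch using only the standard library; the helper names are illustrative, not this module's API:

```rust
// Build a farbfeld header: 8 magic bytes, then big-endian width and height.
fn make_header(width: u32, height: u32) -> Vec<u8> {
    let mut header = Vec::with_capacity(16);
    header.extend_from_slice(b"farbfeld"); // magic value
    header.extend_from_slice(&width.to_be_bytes());
    header.extend_from_slice(&height.to_be_bytes());
    header
}

// Parse the dimensions back out, rejecting short or mismatched input.
fn parse_dimensions(header: &[u8]) -> Option<(u32, u32)> {
    if header.len() < 16 || &header[0..8] != b"farbfeld" {
        return None;
    }
    let width = u32::from_be_bytes(header[8..12].try_into().ok()?);
    let height = u32::from_be_bytes(header[12..16].try_into().ok()?);
    Some((width, height))
}

fn main() {
    let header = make_header(640, 480);
    assert_eq!(parse_dimensions(&header), Some((640, 480)));
    assert_eq!(parse_dimensions(b"not farbfeld data"), None);
    println!("ok");
}
```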
use std::convert::TryFrom;
use std::i64;
use std::io::{self, Read, Seek, SeekFrom, Write};
use byteorder::{BigEndian, ByteOrder, NativeEndian};
use crate::color::ColorType;
use crate::error::{
DecodingError, ImageError, ImageResult, UnsupportedError, UnsupportedErrorKind,
};
use crate::image::{self, ImageDecoder, ImageDecoderRect, ImageEncoder, ImageFormat, Progress};
/// farbfeld Reader
pub struct FarbfeldReader<R: Read> {
width: u32,
height: u32,
inner: R,
/// Relative to the start of the pixel data
current_offset: u64,
cached_byte: Option<u8>,
}
impl<R: Read> FarbfeldReader<R> {
fn new(mut buffered_read: R) -> ImageResult<FarbfeldReader<R>> {
fn read_dimm<R: Read>(from: &mut R) -> ImageResult<u32> {
let mut buf = [0u8; 4];
from.read_exact(&mut buf).map_err(|err| {
ImageError::Decoding(DecodingError::new(ImageFormat::Farbfeld.into(), err))
})?;
Ok(BigEndian::read_u32(&buf))
}
let mut magic = [0u8; 8];
buffered_read.read_exact(&mut magic).map_err(|err| {
ImageError::Decoding(DecodingError::new(ImageFormat::Farbfeld.into(), err))
})?;
if &magic != b"farbfeld" {
return Err(ImageError::Decoding(DecodingError::new(
ImageFormat::Farbfeld.into(),
format!("Invalid magic: {:02x?}", magic),
)));
}
let reader = FarbfeldReader {
width: read_dimm(&mut buffered_read)?,
height: read_dimm(&mut buffered_read)?,
inner: buffered_read,
current_offset: 0,
cached_byte: None,
};
if crate::utils::check_dimension_overflow(
reader.width,
reader.height,
// ColorType is always rgba16
ColorType::Rgba16.bytes_per_pixel(),
) {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Farbfeld.into(),
UnsupportedErrorKind::GenericFeature(format!(
"Image dimensions ({}x{}) are too large",
reader.width, reader.height
)),
),
));
}
Ok(reader)
}
}
impl<R: Read> Read for FarbfeldReader<R> {
fn read(&mut self, mut buf: &mut [u8]) -> io::Result<usize> {
let mut bytes_written = 0;
if let Some(byte) = self.cached_byte.take() {
buf[0] = byte;
buf = &mut buf[1..];
bytes_written = 1;
self.current_offset += 1;
}
if buf.len() == 1 {
buf[0] = cache_byte(&mut self.inner, &mut self.cached_byte)?;
bytes_written += 1;
self.current_offset += 1;
} else {
for channel_out in buf.chunks_exact_mut(2) {
consume_channel(&mut self.inner, channel_out)?;
bytes_written += 2;
self.current_offset += 2;
}
}
Ok(bytes_written)
}
}
impl<R: Read + Seek> Seek for FarbfeldReader<R> {
fn seek(&mut self, pos: SeekFrom) -> io::Result<u64> {
fn parse_offset(original_offset: u64, end_offset: u64, pos: SeekFrom) -> Option<i64> {
match pos {
SeekFrom::Start(off) => i64::try_from(off)
.ok()?
.checked_sub(i64::try_from(original_offset).ok()?),
SeekFrom::End(off) => {
if off < i64::try_from(end_offset).unwrap_or(i64::MAX) {
None
} else {
Some(i64::try_from(end_offset.checked_sub(original_offset)?).ok()? + off)
}
}
SeekFrom::Current(off) => {
if off < i64::try_from(original_offset).unwrap_or(i64::MAX) {
None
} else {
Some(off)
}
}
}
}
let original_offset = self.current_offset;
let end_offset = self.width as u64 * self.height as u64 * 2;
let offset_from_current =
parse_offset(original_offset, end_offset, pos).ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidInput,
"invalid seek to a negative or overflowing position",
)
})?;
// TODO: convert to seek_relative() once that gets stabilised
self.inner.seek(SeekFrom::Current(offset_from_current))?;
self.current_offset = if offset_from_current < 0 {
original_offset.checked_sub(offset_from_current.wrapping_neg() as u64)
} else {
original_offset.checked_add(offset_from_current as u64)
}
.expect("This should've been checked above");
if self.current_offset < end_offset && self.current_offset % 2 == 1 {
let curr = self.inner.seek(SeekFrom::Current(-1))?;
cache_byte(&mut self.inner, &mut self.cached_byte)?;
self.inner.seek(SeekFrom::Start(curr))?;
} else {
self.cached_byte = None;
}
Ok(original_offset)
}
}
fn consume_channel<R: Read>(from: &mut R, to: &mut [u8]) -> io::Result<()> {
let mut ibuf = [0u8; 2];
from.read_exact(&mut ibuf)?;
NativeEndian::write_u16(to, BigEndian::read_u16(&ibuf));
Ok(())
}
fn cache_byte<R: Read>(from: &mut R, cached_byte: &mut Option<u8>) -> io::Result<u8> {
let mut obuf = [0u8; 2];
consume_channel(from, &mut obuf)?;
*cached_byte = Some(obuf[1]);
Ok(obuf[0])
}
/// farbfeld decoder
pub struct FarbfeldDecoder<R: Read> {
reader: FarbfeldReader<R>,
}
impl<R: Read> FarbfeldDecoder<R> {
/// Creates a new decoder that decodes from the stream ```r```
pub fn new(buffered_read: R) -> ImageResult<FarbfeldDecoder<R>> {
Ok(FarbfeldDecoder {
reader: FarbfeldReader::new(buffered_read)?,
})
}
}
impl<'a, R: 'a + Read> ImageDecoder<'a> for FarbfeldDecoder<R> {
type Reader = FarbfeldReader<R>;
fn dimensions(&self) -> (u32, u32) {
(self.reader.width, self.reader.height)
}
fn color_type(&self) -> ColorType {
ColorType::Rgba16
}
fn into_reader(self) -> ImageResult<Self::Reader> {
Ok(self.reader)
}
fn scanline_bytes(&self) -> u64 {
2
}
}
impl<'a, R: 'a + Read + Seek> ImageDecoderRect<'a> for FarbfeldDecoder<R> {
fn read_rect_with_progress<F: Fn(Progress)>(
&mut self,
x: u32,
y: u32,
width: u32,
height: u32,
buf: &mut [u8],
progress_callback: F,
) -> ImageResult<()> {
// A "scanline" (defined as "shortest non-caching read" in the doc) is just one channel in this case
let start = self.reader.stream_position()?;
image::load_rect(
x,
y,
width,
height,
buf,
progress_callback,
self,
|s, scanline| s.reader.seek(SeekFrom::Start(scanline * 2)).map(|_| ()),
|s, buf| s.reader.read_exact(buf),
)?;
self.reader.seek(SeekFrom::Start(start))?;
Ok(())
}
}
/// farbfeld encoder
pub struct FarbfeldEncoder<W: Write> {
w: W,
}
impl<W: Write> FarbfeldEncoder<W> {
/// Create a new encoder that writes its output to ```w```. The writer should be buffered.
pub fn new(buffered_writer: W) -> FarbfeldEncoder<W> {
FarbfeldEncoder { w: buffered_writer }
}
/// Encodes the image ```data``` (native endian)
/// that has dimensions ```width``` and ```height```
pub fn encode(self, data: &[u8], width: u32, height: u32) -> ImageResult<()> {
self.encode_impl(data, width, height)?;
Ok(())
}
fn encode_impl(mut self, data: &[u8], width: u32, height: u32) -> io::Result<()> {
self.w.write_all(b"farbfeld")?;
let mut buf = [0u8; 4];
BigEndian::write_u32(&mut buf, width);
self.w.write_all(&buf)?;
BigEndian::write_u32(&mut buf, height);
self.w.write_all(&buf)?;
for channel in data.chunks_exact(2) {
BigEndian::write_u16(&mut buf, NativeEndian::read_u16(channel));
self.w.write_all(&buf[..2])?;
}
Ok(())
}
}
impl<W: Write> ImageEncoder for FarbfeldEncoder<W> {
fn write_image(
self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
if color_type != ColorType::Rgba16 {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Farbfeld.into(),
UnsupportedErrorKind::Color(color_type.into()),
),
));
}
self.encode(buf, width, height)
}
}
#[cfg(test)]
mod tests {
use crate::codecs::farbfeld::FarbfeldDecoder;
use crate::ImageDecoderRect;
use byteorder::{ByteOrder, NativeEndian};
use std::io::{Cursor, Seek, SeekFrom};
static RECTANGLE_IN: &[u8] = b"farbfeld\
\x00\x00\x00\x02\x00\x00\x00\x03\
\xFF\x01\xFE\x02\xFD\x03\xFC\x04\xFB\x05\xFA\x06\xF9\x07\xF8\x08\
\xF7\x09\xF6\x0A\xF5\x0B\xF4\x0C\xF3\x0D\xF2\x0E\xF1\x0F\xF0\x10\
\xEF\x11\xEE\x12\xED\x13\xEC\x14\xEB\x15\xEA\x16\xE9\x17\xE8\x18";
#[test]
fn read_rect_1x2() {
static RECTANGLE_OUT: &[u16] = &[
0xF30D, 0xF20E, 0xF10F, 0xF010, 0xEB15, 0xEA16, 0xE917, 0xE818,
];
read_rect(1, 1, 1, 2, RECTANGLE_OUT);
}
#[test]
fn read_rect_2x2() {
static RECTANGLE_OUT: &[u16] = &[
0xFF01, 0xFE02, 0xFD03, 0xFC04, 0xFB05, 0xFA06, 0xF907, 0xF808, 0xF709, 0xF60A, 0xF50B,
0xF40C, 0xF30D, 0xF20E, 0xF10F, 0xF010,
];
read_rect(0, 0, 2, 2, RECTANGLE_OUT);
}
#[test]
fn read_rect_2x1() {
static RECTANGLE_OUT: &[u16] = &[
0xEF11, 0xEE12, 0xED13, 0xEC14, 0xEB15, 0xEA16, 0xE917, 0xE818,
];
read_rect(0, 2, 2, 1, RECTANGLE_OUT);
}
#[test]
fn read_rect_2x3() {
static RECTANGLE_OUT: &[u16] = &[
0xFF01, 0xFE02, 0xFD03, 0xFC04, 0xFB05, 0xFA06, 0xF907, 0xF808, 0xF709, 0xF60A, 0xF50B,
0xF40C, 0xF30D, 0xF20E, 0xF10F, 0xF010, 0xEF11, 0xEE12, 0xED13, 0xEC14, 0xEB15, 0xEA16,
0xE917, 0xE818,
];
read_rect(0, 0, 2, 3, RECTANGLE_OUT);
}
#[test]
fn read_rect_in_stream() {
static RECTANGLE_OUT: &[u16] = &[0xEF11, 0xEE12, 0xED13, 0xEC14];
let mut input = vec![];
input.extend_from_slice(b"This is a 31-byte-long prologue");
input.extend_from_slice(RECTANGLE_IN);
let mut input_cur = Cursor::new(input);
input_cur.seek(SeekFrom::Start(31)).unwrap();
let mut out_buf = [0u8; 64];
FarbfeldDecoder::new(input_cur)
.unwrap()
.read_rect(0, 2, 1, 1, &mut out_buf)
.unwrap();
let exp = degenerate_pixels(RECTANGLE_OUT);
assert_eq!(&out_buf[..exp.len()], &exp[..]);
}
#[test]
fn dimension_overflow() {
let header = b"farbfeld\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF";
assert!(FarbfeldDecoder::new(Cursor::new(header)).is_err());
}
fn read_rect(x: u32, y: u32, width: u32, height: u32, exp_wide: &[u16]) {
let mut out_buf = [0u8; 64];
FarbfeldDecoder::new(Cursor::new(RECTANGLE_IN))
.unwrap()
.read_rect(x, y, width, height, &mut out_buf)
.unwrap();
let exp = degenerate_pixels(exp_wide);
assert_eq!(&out_buf[..exp.len()], &exp[..]);
}
fn degenerate_pixels(exp_wide: &[u16]) -> Vec<u8> {
let mut exp = vec![0u8; exp_wide.len() * 2];
NativeEndian::write_u16_into(exp_wide, &mut exp);
exp
}
}
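The farbfeld layout handled by the reader and encoder above is simple enough to restate with only the standard library: an 8-byte `farbfeld` magic, two big-endian `u32` dimensions, then `width * height` RGBA pixels of four big-endian `u16` channels each. A minimal, hypothetical header round-trip (these helper names are not part of the crate) might look like:

```rust
use std::convert::TryInto;

/// Serialize a farbfeld header: b"farbfeld" + big-endian width + big-endian height.
fn write_farbfeld_header(width: u32, height: u32) -> Vec<u8> {
    let mut out = Vec::with_capacity(16);
    out.extend_from_slice(b"farbfeld");
    out.extend_from_slice(&width.to_be_bytes());
    out.extend_from_slice(&height.to_be_bytes());
    out
}

/// Parse the header back, returning (width, height), or None on a bad magic.
fn read_farbfeld_header(buf: &[u8]) -> Option<(u32, u32)> {
    if buf.len() < 16 || &buf[..8] != b"farbfeld" {
        return None;
    }
    let width = u32::from_be_bytes(buf[8..12].try_into().ok()?);
    let height = u32::from_be_bytes(buf[12..16].try_into().ok()?);
    Some((width, height))
}

fn main() {
    let hdr = write_farbfeld_header(2, 3);
    assert_eq!(hdr.len(), 16);
    assert_eq!(read_farbfeld_header(&hdr), Some((2, 3)));
    // A wrong magic is rejected, mirroring the decoder's magic check above.
    assert_eq!(read_farbfeld_header(&[0u8; 16]), None);
}
```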

vendor/image/src/codecs/gif.rs vendored Normal file

//! Decoding of GIF Images
//!
//! GIF (Graphics Interchange Format) is an image format that supports lossless compression.
//!
//! # Related Links
//! * <http://www.w3.org/Graphics/GIF/spec-gif89a.txt> - The GIF Specification
//!
//! # Examples
//! ```rust,no_run
//! use image::codecs::gif::{GifDecoder, GifEncoder};
//! use image::{ImageDecoder, AnimationDecoder};
//! use std::fs::File;
//! # fn main() -> std::io::Result<()> {
//! // Decode a gif into frames
//! let file_in = File::open("foo.gif")?;
//! let mut decoder = GifDecoder::new(file_in).unwrap();
//! let frames = decoder.into_frames();
//! let frames = frames.collect_frames().expect("error decoding gif");
//!
//! // Encode frames into a gif and save to a file
//! let mut file_out = File::create("out.gif")?;
//! let mut encoder = GifEncoder::new(file_out);
//! encoder.encode_frames(frames.into_iter());
//! # Ok(())
//! # }
//! ```
#![allow(clippy::while_let_loop)]
use std::convert::TryFrom;
use std::convert::TryInto;
use std::io::{self, Cursor, Read, Write};
use std::marker::PhantomData;
use std::mem;
use gif::ColorOutput;
use gif::{DisposalMethod, Frame};
use num_rational::Ratio;
use crate::animation;
use crate::color::{ColorType, Rgba};
use crate::error::{
DecodingError, EncodingError, ImageError, ImageResult, ParameterError, ParameterErrorKind,
UnsupportedError, UnsupportedErrorKind,
};
use crate::image::{self, AnimationDecoder, ImageDecoder, ImageFormat};
use crate::io::Limits;
use crate::traits::Pixel;
use crate::ImageBuffer;
/// GIF decoder
pub struct GifDecoder<R: Read> {
reader: gif::Decoder<R>,
limits: Limits,
}
impl<R: Read> GifDecoder<R> {
/// Creates a new decoder that decodes the input stream `r`
pub fn new(r: R) -> ImageResult<GifDecoder<R>> {
let mut decoder = gif::DecodeOptions::new();
decoder.set_color_output(ColorOutput::RGBA);
Ok(GifDecoder {
reader: decoder.read_info(r).map_err(ImageError::from_decoding)?,
limits: Limits::default(),
})
}
/// Creates a new decoder that decodes the input stream `r`, using limits `limits`
pub fn with_limits(r: R, limits: Limits) -> ImageResult<GifDecoder<R>> {
let mut decoder = gif::DecodeOptions::new();
decoder.set_color_output(ColorOutput::RGBA);
Ok(GifDecoder {
reader: decoder.read_info(r).map_err(ImageError::from_decoding)?,
limits,
})
}
}
/// Wrapper struct around a `Cursor<Vec<u8>>`
pub struct GifReader<R>(Cursor<Vec<u8>>, PhantomData<R>);
impl<R> Read for GifReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
self.0.read(buf)
}
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> {
if self.0.position() == 0 && buf.is_empty() {
mem::swap(buf, self.0.get_mut());
Ok(buf.len())
} else {
self.0.read_to_end(buf)
}
}
}
impl<'a, R: 'a + Read> ImageDecoder<'a> for GifDecoder<R> {
type Reader = GifReader<R>;
fn dimensions(&self) -> (u32, u32) {
(
u32::from(self.reader.width()),
u32::from(self.reader.height()),
)
}
fn color_type(&self) -> ColorType {
ColorType::Rgba8
}
fn into_reader(self) -> ImageResult<Self::Reader> {
Ok(GifReader(
Cursor::new(image::decoder_to_vec(self)?),
PhantomData,
))
}
fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> {
assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes()));
let frame = match self
.reader
.next_frame_info()
.map_err(ImageError::from_decoding)?
{
Some(frame) => FrameInfo::new_from_frame(frame),
None => {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::NoMoreData,
)))
}
};
let (width, height) = self.dimensions();
if frame.left == 0
&& frame.width == width
&& (frame.top as u64 + frame.height as u64 <= height as u64)
{
// If the frame matches the logical screen, or, as a more general case,
// fits into it and touches its left and right borders, then
// we can directly write it into the buffer without causing line wraparound.
let line_length = usize::try_from(width)
.unwrap()
.checked_mul(self.color_type().bytes_per_pixel() as usize)
.unwrap();
// isolate the portion of the buffer to read the frame data into.
// the chunks above and below it are going to be zeroed.
let (blank_top, rest) =
buf.split_at_mut(line_length.checked_mul(frame.top as usize).unwrap());
let (buf, blank_bottom) =
rest.split_at_mut(line_length.checked_mul(frame.height as usize).unwrap());
debug_assert_eq!(buf.len(), self.reader.buffer_size());
// this is only necessary in case the buffer is not zeroed
for b in blank_top {
*b = 0;
}
// fill the middle section with the frame data
self.reader
.read_into_buffer(buf)
.map_err(ImageError::from_decoding)?;
// this is only necessary in case the buffer is not zeroed
for b in blank_bottom {
*b = 0;
}
} else {
// If the frame does not match the logical screen, read into an extra buffer
// and 'insert' the frame from left/top to logical screen width/height.
let buffer_size = self.reader.buffer_size();
self.limits.reserve_usize(buffer_size)?;
let mut frame_buffer = vec![0; buffer_size];
self.limits.free_usize(buffer_size);
self.reader
.read_into_buffer(&mut frame_buffer[..])
.map_err(ImageError::from_decoding)?;
let frame_buffer = ImageBuffer::from_raw(frame.width, frame.height, frame_buffer);
let image_buffer = ImageBuffer::from_raw(width, height, buf);
// `buffer_size` uses wrapping arithmetic, thus might not report the
// correct storage requirement if the result does not fit in `usize`.
// `ImageBuffer::from_raw` detects overflow and reports by returning `None`.
if frame_buffer.is_none() || image_buffer.is_none() {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Gif.into(),
UnsupportedErrorKind::GenericFeature(format!(
"Image dimensions ({}, {}) are too large",
frame.width, frame.height
)),
),
));
}
let frame_buffer = frame_buffer.unwrap();
let mut image_buffer = image_buffer.unwrap();
for (x, y, pixel) in image_buffer.enumerate_pixels_mut() {
let frame_x = x.wrapping_sub(frame.left);
let frame_y = y.wrapping_sub(frame.top);
if frame_x < frame.width && frame_y < frame.height {
*pixel = *frame_buffer.get_pixel(frame_x, frame_y);
} else {
// this is only necessary in case the buffer is not zeroed
*pixel = Rgba([0, 0, 0, 0]);
}
}
}
Ok(())
}
}
struct GifFrameIterator<R: Read> {
reader: gif::Decoder<R>,
width: u32,
height: u32,
non_disposed_frame: ImageBuffer<Rgba<u8>, Vec<u8>>,
}
impl<R: Read> GifFrameIterator<R> {
fn new(decoder: GifDecoder<R>) -> GifFrameIterator<R> {
let (width, height) = decoder.dimensions();
// intentionally ignore the background color for web compatibility
// create the first non disposed frame
let non_disposed_frame = ImageBuffer::from_pixel(width, height, Rgba([0, 0, 0, 0]));
GifFrameIterator {
reader: decoder.reader,
width,
height,
non_disposed_frame,
}
}
}
impl<R: Read> Iterator for GifFrameIterator<R> {
type Item = ImageResult<animation::Frame>;
fn next(&mut self) -> Option<ImageResult<animation::Frame>> {
// begin looping over each frame
let frame = match self.reader.next_frame_info() {
Ok(frame_info) => {
if let Some(frame) = frame_info {
FrameInfo::new_from_frame(frame)
} else {
// no more frames
return None;
}
}
Err(err) => return Some(Err(ImageError::from_decoding(err))),
};
let mut vec = vec![0; self.reader.buffer_size()];
if let Err(err) = self.reader.read_into_buffer(&mut vec) {
return Some(Err(ImageError::from_decoding(err)));
}
// create the image buffer from the raw frame.
// `buffer_size` uses wrapping arithmetic, thus might not report the
// correct storage requirement if the result does not fit in `usize`.
// on the other hand, `ImageBuffer::from_raw` detects overflow and
// reports by returning `None`.
let mut frame_buffer = match ImageBuffer::from_raw(frame.width, frame.height, vec) {
Some(frame_buffer) => frame_buffer,
None => {
return Some(Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Gif.into(),
UnsupportedErrorKind::GenericFeature(format!(
"Image dimensions ({}, {}) are too large",
frame.width, frame.height
)),
),
)))
}
};
// blend the current frame with the non-disposed frame, then update
// the non-disposed frame according to the disposal method.
fn blend_and_dispose_pixel(
dispose: DisposalMethod,
previous: &mut Rgba<u8>,
current: &mut Rgba<u8>,
) {
let pixel_alpha = current.channels()[3];
if pixel_alpha == 0 {
*current = *previous;
}
match dispose {
DisposalMethod::Any | DisposalMethod::Keep => {
// do not dispose
// (keep pixels from this frame)
// note: the `Any` disposal method is underspecified in the GIF
// spec, but most viewers treat it identically to `Keep`
*previous = *current;
}
DisposalMethod::Background => {
// restore to background color
// (background shows through transparent pixels in the next frame)
*previous = Rgba([0, 0, 0, 0]);
}
DisposalMethod::Previous => {
// restore to previous
// (the canvas keeps the last frame that was not disposed)
}
}
}
// if `frame_buffer`'s frame exactly matches the entire image, then
// use it directly, else create a new buffer to hold the composited
// image.
let image_buffer = if (frame.left, frame.top) == (0, 0)
&& (self.width, self.height) == frame_buffer.dimensions()
{
for (x, y, pixel) in frame_buffer.enumerate_pixels_mut() {
let previous_pixel = self.non_disposed_frame.get_pixel_mut(x, y);
blend_and_dispose_pixel(frame.disposal_method, previous_pixel, pixel);
}
frame_buffer
} else {
ImageBuffer::from_fn(self.width, self.height, |x, y| {
let frame_x = x.wrapping_sub(frame.left);
let frame_y = y.wrapping_sub(frame.top);
let previous_pixel = self.non_disposed_frame.get_pixel_mut(x, y);
if frame_x < frame_buffer.width() && frame_y < frame_buffer.height() {
let mut pixel = *frame_buffer.get_pixel(frame_x, frame_y);
blend_and_dispose_pixel(frame.disposal_method, previous_pixel, &mut pixel);
pixel
} else {
// out of bounds, return pixel from previous frame
*previous_pixel
}
})
};
Some(Ok(animation::Frame::from_parts(
image_buffer,
0,
0,
frame.delay,
)))
}
}
impl<'a, R: Read + 'a> AnimationDecoder<'a> for GifDecoder<R> {
fn into_frames(self) -> animation::Frames<'a> {
animation::Frames::new(Box::new(GifFrameIterator::new(self)))
}
}
struct FrameInfo {
left: u32,
top: u32,
width: u32,
height: u32,
disposal_method: DisposalMethod,
delay: animation::Delay,
}
impl FrameInfo {
fn new_from_frame(frame: &Frame) -> FrameInfo {
FrameInfo {
left: u32::from(frame.left),
top: u32::from(frame.top),
width: u32::from(frame.width),
height: u32::from(frame.height),
disposal_method: frame.dispose,
// frame.delay is in units of 10ms so frame.delay*10 is in ms
delay: animation::Delay::from_ratio(Ratio::new(u32::from(frame.delay) * 10, 1)),
}
}
}
/// Number of repetitions for a GIF animation
#[derive(Clone, Copy, Debug)]
pub enum Repeat {
/// Finite number of repetitions
Finite(u16),
/// Looping GIF
Infinite,
}
impl Repeat {
pub(crate) fn to_gif_enum(&self) -> gif::Repeat {
match self {
Repeat::Finite(n) => gif::Repeat::Finite(*n),
Repeat::Infinite => gif::Repeat::Infinite,
}
}
}
/// GIF encoder.
pub struct GifEncoder<W: Write> {
w: Option<W>,
gif_encoder: Option<gif::Encoder<W>>,
speed: i32,
repeat: Option<Repeat>,
}
impl<W: Write> GifEncoder<W> {
/// Creates a new GIF encoder with a speed of 1, prioritizing quality over encoding speed.
pub fn new(w: W) -> GifEncoder<W> {
Self::new_with_speed(w, 1)
}
/// Creates a new GIF encoder with the given speed parameter `speed`. See
/// [`Frame::from_rgba_speed`](https://docs.rs/gif/latest/gif/struct.Frame.html#method.from_rgba_speed)
/// for more information.
pub fn new_with_speed(w: W, speed: i32) -> GifEncoder<W> {
assert!(
(1..=30).contains(&speed),
"speed needs to be in the range [1, 30]"
);
GifEncoder {
w: Some(w),
gif_encoder: None,
speed,
repeat: None,
}
}
/// Set the repeat behaviour of the encoded GIF
pub fn set_repeat(&mut self, repeat: Repeat) -> ImageResult<()> {
if let Some(ref mut encoder) = self.gif_encoder {
encoder
.set_repeat(repeat.to_gif_enum())
.map_err(ImageError::from_encoding)?;
}
self.repeat = Some(repeat);
Ok(())
}
/// Encode a single image.
pub fn encode(
&mut self,
data: &[u8],
width: u32,
height: u32,
color: ColorType,
) -> ImageResult<()> {
let (width, height) = self.gif_dimensions(width, height)?;
match color {
ColorType::Rgb8 => self.encode_gif(Frame::from_rgb(width, height, data)),
ColorType::Rgba8 => {
self.encode_gif(Frame::from_rgba(width, height, &mut data.to_owned()))
}
_ => Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Gif.into(),
UnsupportedErrorKind::Color(color.into()),
),
)),
}
}
/// Encode one frame of animation.
pub fn encode_frame(&mut self, img_frame: animation::Frame) -> ImageResult<()> {
let frame = self.convert_frame(img_frame)?;
self.encode_gif(frame)
}
/// Encodes frames.
/// Consider using `try_encode_frames` instead to encode an `animation::Frames`-like iterator.
pub fn encode_frames<F>(&mut self, frames: F) -> ImageResult<()>
where
F: IntoIterator<Item = animation::Frame>,
{
for img_frame in frames {
self.encode_frame(img_frame)?;
}
Ok(())
}
/// Try to encode a collection of `ImageResult<animation::Frame>` objects.
/// Use this function to encode an `animation::Frames`-like iterator.
/// Whenever an `Err` item is encountered, that value is returned without further actions.
pub fn try_encode_frames<F>(&mut self, frames: F) -> ImageResult<()>
where
F: IntoIterator<Item = ImageResult<animation::Frame>>,
{
for img_frame in frames {
self.encode_frame(img_frame?)?;
}
Ok(())
}
pub(crate) fn convert_frame(
&mut self,
img_frame: animation::Frame,
) -> ImageResult<Frame<'static>> {
// get the delay before converting img_frame
let frame_delay = img_frame.delay().into_ratio().to_integer();
// convert img_frame into RgbaImage
let mut rgba_frame = img_frame.into_buffer();
let (width, height) = self.gif_dimensions(rgba_frame.width(), rgba_frame.height())?;
// Create the gif::Frame from the animation::Frame
let mut frame = Frame::from_rgba_speed(width, height, &mut rgba_frame, self.speed);
// Saturate the conversion to u16::MAX instead of returning an error as that
// would require a new special cased variant in ParameterErrorKind which most
// likely couldn't be reused for other cases. This isn't a bad trade-off given
// that the current algorithm is already lossy.
frame.delay = (frame_delay / 10).try_into().unwrap_or(std::u16::MAX);
Ok(frame)
}
fn gif_dimensions(&self, width: u32, height: u32) -> ImageResult<(u16, u16)> {
fn inner_dimensions(width: u32, height: u32) -> Option<(u16, u16)> {
let width = u16::try_from(width).ok()?;
let height = u16::try_from(height).ok()?;
Some((width, height))
}
// TODO: this is not very idiomatic yet. Should return an EncodingError.
inner_dimensions(width, height).ok_or_else(|| {
ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
))
})
}
pub(crate) fn encode_gif(&mut self, mut frame: Frame) -> ImageResult<()> {
let gif_encoder;
if let Some(ref mut encoder) = self.gif_encoder {
gif_encoder = encoder;
} else {
let writer = self.w.take().unwrap();
let mut encoder = gif::Encoder::new(writer, frame.width, frame.height, &[])
.map_err(ImageError::from_encoding)?;
if let Some(ref repeat) = self.repeat {
encoder
.set_repeat(repeat.to_gif_enum())
.map_err(ImageError::from_encoding)?;
}
self.gif_encoder = Some(encoder);
gif_encoder = self.gif_encoder.as_mut().unwrap()
}
frame.dispose = gif::DisposalMethod::Background;
gif_encoder
.write_frame(&frame)
.map_err(ImageError::from_encoding)
}
}
impl ImageError {
fn from_decoding(err: gif::DecodingError) -> ImageError {
use gif::DecodingError::*;
match err {
err @ Format(_) => {
ImageError::Decoding(DecodingError::new(ImageFormat::Gif.into(), err))
}
Io(io_err) => ImageError::IoError(io_err),
}
}
fn from_encoding(err: gif::EncodingError) -> ImageError {
use gif::EncodingError::*;
match err {
err @ Format(_) => {
ImageError::Encoding(EncodingError::new(ImageFormat::Gif.into(), err))
}
Io(io_err) => ImageError::IoError(io_err),
}
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn frames_exceeding_logical_screen_size() {
// This is a gif with 10x10 logical screen, but a 16x16 frame + 6px offset inside.
let data = vec![
0x47, 0x49, 0x46, 0x38, 0x39, 0x61, 0x0A, 0x00, 0x0A, 0x00, 0xF0, 0x00, 0x00, 0x00,
0x00, 0x00, 0x0E, 0xFF, 0x1F, 0x21, 0xF9, 0x04, 0x09, 0x64, 0x00, 0x00, 0x00, 0x2C,
0x06, 0x00, 0x06, 0x00, 0x10, 0x00, 0x10, 0x00, 0x00, 0x02, 0x23, 0x84, 0x8F, 0xA9,
0xBB, 0xE1, 0xE8, 0x42, 0x8A, 0x0F, 0x50, 0x79, 0xAE, 0xD1, 0xF9, 0x7A, 0xE8, 0x71,
0x5B, 0x48, 0x81, 0x64, 0xD5, 0x91, 0xCA, 0x89, 0x4D, 0x21, 0x63, 0x89, 0x4C, 0x09,
0x77, 0xF5, 0x6D, 0x14, 0x00, 0x3B,
];
let decoder = GifDecoder::new(Cursor::new(data)).unwrap();
let mut buf = vec![0u8; decoder.total_bytes() as usize];
assert!(decoder.read_image(&mut buf).is_ok());
}
}
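The frame-compositing step in `GifFrameIterator` can be restated outside the crate's types. This is a sketch with hypothetical stand-in types (`Rgba`, `Disposal`), mirroring `blend_and_dispose_pixel` above: a fully transparent pixel shows the previous canvas, then the canvas is updated according to the frame's disposal method.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Rgba([u8; 4]);

#[derive(Clone, Copy)]
enum Disposal {
    Keep,       // keep this frame's pixels as the new canvas
    Background, // clear the canvas to transparent for the next frame
    Previous,   // leave the canvas as it was before this frame
}

fn blend_and_dispose(dispose: Disposal, previous: &mut Rgba, current: &mut Rgba) {
    // A fully transparent pixel lets the previous canvas show through.
    if current.0[3] == 0 {
        *current = *previous;
    }
    match dispose {
        Disposal::Keep => *previous = *current,
        Disposal::Background => *previous = Rgba([0, 0, 0, 0]),
        Disposal::Previous => {}
    }
}

fn main() {
    let mut canvas = Rgba([10, 20, 30, 255]);
    // Transparent pixel: the previous canvas shows through.
    let mut px = Rgba([0, 0, 0, 0]);
    blend_and_dispose(Disposal::Keep, &mut canvas, &mut px);
    assert_eq!(px, Rgba([10, 20, 30, 255]));
    // Background disposal clears the canvas for the next frame.
    let mut px2 = Rgba([1, 2, 3, 255]);
    blend_and_dispose(Disposal::Background, &mut canvas, &mut px2);
    assert_eq!(canvas, Rgba([0, 0, 0, 0]));
}
```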

vendor/image/src/codecs/hdr/decoder.rs vendored Normal file
File diff suppressed because it is too large

vendor/image/src/codecs/hdr/encoder.rs vendored Normal file
use crate::codecs::hdr::{rgbe8, Rgbe8Pixel, SIGNATURE};
use crate::color::Rgb;
use crate::error::ImageResult;
use std::cmp::Ordering;
use std::io::{Result, Write};
/// Radiance HDR encoder
pub struct HdrEncoder<W: Write> {
w: W,
}
impl<W: Write> HdrEncoder<W> {
/// Creates encoder
pub fn new(w: W) -> HdrEncoder<W> {
HdrEncoder { w }
}
/// Encodes the image ```data```
/// that has dimensions ```width``` and ```height```
pub fn encode(mut self, data: &[Rgb<f32>], width: usize, height: usize) -> ImageResult<()> {
assert!(data.len() >= width * height);
let w = &mut self.w;
w.write_all(SIGNATURE)?;
w.write_all(b"\n")?;
w.write_all(b"# Rust HDR encoder\n")?;
w.write_all(b"FORMAT=32-bit_rle_rgbe\n\n")?;
w.write_all(format!("-Y {} +X {}\n", height, width).as_bytes())?;
if !(8..=32_768).contains(&width) {
for &pix in data {
write_rgbe8(w, to_rgbe8(pix))?;
}
} else {
// new RLE marker contains scanline width
let marker = rgbe8(2, 2, (width / 256) as u8, (width % 256) as u8);
// buffers for encoded pixels
let mut bufr = vec![0; width];
let mut bufg = vec![0; width];
let mut bufb = vec![0; width];
let mut bufe = vec![0; width];
let mut rle_buf = vec![0; width];
for scanline in data.chunks(width) {
for ((((r, g), b), e), &pix) in bufr
.iter_mut()
.zip(bufg.iter_mut())
.zip(bufb.iter_mut())
.zip(bufe.iter_mut())
.zip(scanline.iter())
{
let cp = to_rgbe8(pix);
*r = cp.c[0];
*g = cp.c[1];
*b = cp.c[2];
*e = cp.e;
}
write_rgbe8(w, marker)?; // New RLE encoding marker
rle_buf.clear();
rle_compress(&bufr[..], &mut rle_buf);
w.write_all(&rle_buf[..])?;
rle_buf.clear();
rle_compress(&bufg[..], &mut rle_buf);
w.write_all(&rle_buf[..])?;
rle_buf.clear();
rle_compress(&bufb[..], &mut rle_buf);
w.write_all(&rle_buf[..])?;
rle_buf.clear();
rle_compress(&bufe[..], &mut rle_buf);
w.write_all(&rle_buf[..])?;
}
}
Ok(())
}
}
#[derive(Debug, PartialEq, Eq)]
enum RunOrNot {
Run(u8, usize),
Norun(usize, usize),
}
use self::RunOrNot::{Norun, Run};
const RUN_MAX_LEN: usize = 127;
const NORUN_MAX_LEN: usize = 128;
struct RunIterator<'a> {
data: &'a [u8],
curidx: usize,
}
impl<'a> RunIterator<'a> {
fn new(data: &'a [u8]) -> RunIterator<'a> {
RunIterator { data, curidx: 0 }
}
}
impl<'a> Iterator for RunIterator<'a> {
type Item = RunOrNot;
fn next(&mut self) -> Option<Self::Item> {
if self.curidx == self.data.len() {
None
} else {
let cv = self.data[self.curidx];
let crun = self.data[self.curidx..]
.iter()
.take_while(|&&v| v == cv)
.take(RUN_MAX_LEN)
.count();
let ret = if crun > 2 {
Run(cv, crun)
} else {
Norun(self.curidx, crun)
};
self.curidx += crun;
Some(ret)
}
}
}
struct NorunCombineIterator<'a> {
runiter: RunIterator<'a>,
prev: Option<RunOrNot>,
}
impl<'a> NorunCombineIterator<'a> {
fn new(data: &'a [u8]) -> NorunCombineIterator<'a> {
NorunCombineIterator {
runiter: RunIterator::new(data),
prev: None,
}
}
}
// Combines sequential noruns produced by RunIterator
impl<'a> Iterator for NorunCombineIterator<'a> {
type Item = RunOrNot;
fn next(&mut self) -> Option<Self::Item> {
loop {
match self.prev.take() {
Some(Run(c, len)) => {
// Just return stored run
return Some(Run(c, len));
}
Some(Norun(idx, len)) => {
// Let's see if we need to continue norun
match self.runiter.next() {
Some(Norun(_, len1)) => {
// norun continues
let clen = len + len1; // combined length
match clen.cmp(&NORUN_MAX_LEN) {
Ordering::Equal => return Some(Norun(idx, clen)),
Ordering::Greater => {
// combined norun exceeds maximum length. store extra part of norun
self.prev =
Some(Norun(idx + NORUN_MAX_LEN, clen - NORUN_MAX_LEN));
// then return maximal norun
return Some(Norun(idx, NORUN_MAX_LEN));
}
Ordering::Less => {
// len + len1 < NORUN_MAX_LEN
self.prev = Some(Norun(idx, len + len1));
// combine and continue loop
}
}
}
Some(Run(c, len1)) => {
// Run encountered. Store it
self.prev = Some(Run(c, len1));
return Some(Norun(idx, len)); // and return combined norun
}
None => {
// End of sequence
return Some(Norun(idx, len)); // return combined norun
}
}
} // End match self.prev.take() == Some(NoRun())
None => {
// No norun to combine
match self.runiter.next() {
Some(Norun(idx, len)) => {
self.prev = Some(Norun(idx, len));
// store for combine and continue the loop
}
Some(Run(c, len)) => {
// Some run. Just return it
return Some(Run(c, len));
}
None => {
// That's all, folks
return None;
}
}
} // End match self.prev.take() == None
} // End match
} // End loop
}
}
// Appends RLE compressed ```data``` to ```rle```
fn rle_compress(data: &[u8], rle: &mut Vec<u8>) {
rle.clear();
if data.is_empty() {
rle.push(0); // Technically correct. It means read next 0 bytes.
return;
}
// Task: split data into chunks of repeating (max 127) and non-repeating bytes (max 128)
// Prepend non-repeating chunk with its length
// Replace repeating byte with (run length + 128) and the byte
for rnr in NorunCombineIterator::new(data) {
match rnr {
Run(c, len) => {
assert!(len <= 127);
rle.push(128u8 + len as u8);
rle.push(c);
}
Norun(idx, len) => {
assert!(len <= 128);
rle.push(len as u8);
rle.extend_from_slice(&data[idx..idx + len]);
}
}
}
}
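For reference, the inverse of this scheme is straightforward: a length byte above 128 introduces a run of `len - 128` copies of the next byte, while a length byte of at most 128 introduces that many literal bytes. The following decoder is a sketch for illustration, not part of the crate:

```rust
fn rle_decompress(rle: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < rle.len() {
        let len = rle[i] as usize;
        i += 1;
        if len > 128 {
            // run: (len - 128) copies of the next byte
            out.extend(std::iter::repeat(rle[i]).take(len - 128));
            i += 1;
        } else {
            // literal: the next `len` bytes verbatim
            out.extend_from_slice(&rle[i..i + len]);
            i += len;
        }
    }
    out
}

fn main() {
    // [1, 1] is too short for a run; [2, 2, 2] becomes one:
    // a 2-byte literal chunk followed by a run of 3.
    assert_eq!(rle_decompress(&[2, 1, 1, 128 + 3, 2]), vec![1, 1, 2, 2, 2]);
    // The empty-input encoding (a single zero length byte) decodes to nothing.
    assert_eq!(rle_decompress(&[0]), Vec::<u8>::new());
}
```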
fn write_rgbe8<W: Write>(w: &mut W, v: Rgbe8Pixel) -> Result<()> {
w.write_all(&[v.c[0], v.c[1], v.c[2], v.e])
}
/// Converts ```Rgb<f32>``` into ```Rgbe8Pixel```
pub fn to_rgbe8(pix: Rgb<f32>) -> Rgbe8Pixel {
let pix = pix.0;
let mx = f32::max(pix[0], f32::max(pix[1], pix[2]));
if mx <= 0.0 {
Rgbe8Pixel { c: [0, 0, 0], e: 0 }
} else {
// let (frac, exp) = mx.frexp(); // unstable yet
let exp = mx.log2().floor() as i32 + 1;
let mul = f32::powi(2.0, exp);
let mut conv = [0u8; 3];
for (cv, &sv) in conv.iter_mut().zip(pix.iter()) {
*cv = f32::trunc(sv / mul * 256.0) as u8;
}
Rgbe8Pixel {
c: conv,
e: (exp + 128) as u8,
}
}
}
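Both directions of the shared-exponent RGBE encoding can be restated over plain arrays. The decode direction below is sketched from the encoder's math (`v = c * 2^(e - 128) / 256`); the crate's actual decoder lives in `codecs/hdr/decoder.rs`, whose diff is suppressed above, so treat this as an illustration rather than the crate's implementation:

```rust
fn to_rgbe8(pix: [f32; 3]) -> ([u8; 3], u8) {
    let mx = pix[0].max(pix[1]).max(pix[2]);
    if mx <= 0.0 {
        return ([0, 0, 0], 0);
    }
    // exponent chosen so the largest component maps into [0.5, 1.0) * 256
    let exp = mx.log2().floor() as i32 + 1;
    let mul = f32::powi(2.0, exp);
    let mut c = [0u8; 3];
    for (cv, &sv) in c.iter_mut().zip(pix.iter()) {
        *cv = f32::trunc(sv / mul * 256.0) as u8;
    }
    (c, (exp + 128) as u8)
}

fn from_rgbe8(c: [u8; 3], e: u8) -> [f32; 3] {
    let scale = f32::powi(2.0, e as i32 - 128) / 256.0;
    [c[0] as f32 * scale, c[1] as f32 * scale, c[2] as f32 * scale]
}

fn main() {
    let (c, e) = to_rgbe8([0.5, 0.25, 1.0]);
    assert_eq!((c, e), ([64, 32, 128], 129));
    // Exact powers of two survive the round trip exactly.
    assert_eq!(from_rgbe8(c, e), [0.5, 0.25, 1.0]);
}
```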
#[test]
fn to_rgbe8_test() {
use crate::codecs::hdr::rgbe8;
let test_cases = vec![rgbe8(0, 0, 0, 0), rgbe8(1, 1, 128, 128)];
for &pix in &test_cases {
assert_eq!(pix, to_rgbe8(pix.to_hdr()));
}
for mc in 128..255 {
// TODO: use inclusive range when stable
let pix = rgbe8(mc, mc, mc, 100);
assert_eq!(pix, to_rgbe8(pix.to_hdr()));
let pix = rgbe8(mc, 0, mc, 130);
assert_eq!(pix, to_rgbe8(pix.to_hdr()));
let pix = rgbe8(0, 0, mc, 140);
assert_eq!(pix, to_rgbe8(pix.to_hdr()));
let pix = rgbe8(1, 0, mc, 150);
assert_eq!(pix, to_rgbe8(pix.to_hdr()));
let pix = rgbe8(1, mc, 10, 128);
assert_eq!(pix, to_rgbe8(pix.to_hdr()));
for c in 0..255 {
// Radiance HDR seems to be pre IEEE 754.
// exponent can be -128 (represented as 0u8), so some colors cannot be represented in normalized f32
// Let's exclude exponent value of -128 (0u8) from testing
let pix = rgbe8(1, mc, c, if c == 0 { 1 } else { c });
assert_eq!(pix, to_rgbe8(pix.to_hdr()));
}
}
fn relative_dist(a: Rgb<f32>, b: Rgb<f32>) -> f32 {
// maximal difference divided by maximal value
let max_diff =
a.0.iter()
.zip(b.0.iter())
.fold(0.0, |diff, (&a, &b)| f32::max(diff, (a - b).abs()));
let max_val =
a.0.iter()
.chain(b.0.iter())
.fold(0.0, |maxv, &a| f32::max(maxv, a));
if max_val == 0.0 {
0.0
} else {
max_diff / max_val
}
}
let test_values = vec![
0.000_001, 0.000_02, 0.000_3, 0.004, 0.05, 0.6, 7.0, 80.0, 900.0, 1_000.0, 20_000.0,
300_000.0,
];
for &r in &test_values {
for &g in &test_values {
for &b in &test_values {
let c1 = Rgb([r, g, b]);
let c2 = to_rgbe8(c1).to_hdr();
let rel_dist = relative_dist(c1, c2);
// Maximal value is normalized to the range 128..256, thus we have 1/128 precision
assert!(
rel_dist <= 1.0 / 128.0,
"Relative distance ({}) exceeds 1/128 for {:?} and {:?}",
rel_dist,
c1,
c2
);
}
}
}
}
#[test]
fn runiterator_test() {
let data = [];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), None);
let data = [5];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Norun(0, 1)));
assert_eq!(run_iter.next(), None);
let data = [1, 1];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Norun(0, 2)));
assert_eq!(run_iter.next(), None);
let data = [0, 0, 0];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Run(0u8, 3)));
assert_eq!(run_iter.next(), None);
let data = [0, 0, 1, 1];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Norun(0, 2)));
assert_eq!(run_iter.next(), Some(Norun(2, 2)));
assert_eq!(run_iter.next(), None);
let data = [0, 0, 0, 1, 1];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Run(0u8, 3)));
assert_eq!(run_iter.next(), Some(Norun(3, 2)));
assert_eq!(run_iter.next(), None);
let data = [1, 2, 2, 2];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Norun(0, 1)));
assert_eq!(run_iter.next(), Some(Run(2u8, 3)));
assert_eq!(run_iter.next(), None);
let data = [1, 1, 2, 2, 2];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Norun(0, 2)));
assert_eq!(run_iter.next(), Some(Run(2u8, 3)));
assert_eq!(run_iter.next(), None);
let data = [2; 128];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Run(2u8, 127)));
assert_eq!(run_iter.next(), Some(Norun(127, 1)));
assert_eq!(run_iter.next(), None);
let data = [2; 129];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Run(2u8, 127)));
assert_eq!(run_iter.next(), Some(Norun(127, 2)));
assert_eq!(run_iter.next(), None);
let data = [2; 130];
let mut run_iter = RunIterator::new(&data[..]);
assert_eq!(run_iter.next(), Some(Run(2u8, 127)));
assert_eq!(run_iter.next(), Some(Run(2u8, 3)));
assert_eq!(run_iter.next(), None);
}
#[test]
fn noruncombine_test() {
fn a<T>(mut v: Vec<T>, mut other: Vec<T>) -> Vec<T> {
v.append(&mut other);
v
}
let v = vec![];
let mut rsi = NorunCombineIterator::new(&v[..]);
assert_eq!(rsi.next(), None);
let v = vec![1];
let mut rsi = NorunCombineIterator::new(&v[..]);
assert_eq!(rsi.next(), Some(Norun(0, 1)));
assert_eq!(rsi.next(), None);
let v = vec![2, 2];
let mut rsi = NorunCombineIterator::new(&v[..]);
assert_eq!(rsi.next(), Some(Norun(0, 2)));
assert_eq!(rsi.next(), None);
let v = vec![3, 3, 3];
let mut rsi = NorunCombineIterator::new(&v[..]);
assert_eq!(rsi.next(), Some(Run(3, 3)));
assert_eq!(rsi.next(), None);
let v = vec![4, 4, 3, 3, 3];
let mut rsi = NorunCombineIterator::new(&v[..]);
assert_eq!(rsi.next(), Some(Norun(0, 2)));
assert_eq!(rsi.next(), Some(Run(3, 3)));
assert_eq!(rsi.next(), None);
let v = vec![40; 400];
let mut rsi = NorunCombineIterator::new(&v[..]);
assert_eq!(rsi.next(), Some(Run(40, 127)));
assert_eq!(rsi.next(), Some(Run(40, 127)));
assert_eq!(rsi.next(), Some(Run(40, 127)));
assert_eq!(rsi.next(), Some(Run(40, 19)));
assert_eq!(rsi.next(), None);
let v = a(a(vec![5; 3], vec![6; 129]), vec![7, 3, 7, 10, 255]);
let mut rsi = NorunCombineIterator::new(&v[..]);
assert_eq!(rsi.next(), Some(Run(5, 3)));
assert_eq!(rsi.next(), Some(Run(6, 127)));
assert_eq!(rsi.next(), Some(Norun(130, 7)));
assert_eq!(rsi.next(), None);
let v = a(a(vec![5; 2], vec![6; 129]), vec![7, 3, 7, 7, 255]);
let mut rsi = NorunCombineIterator::new(&v[..]);
assert_eq!(rsi.next(), Some(Norun(0, 2)));
assert_eq!(rsi.next(), Some(Run(6, 127)));
assert_eq!(rsi.next(), Some(Norun(129, 7)));
assert_eq!(rsi.next(), None);
let v: Vec<_> = ::std::iter::repeat(())
.flat_map(|_| (0..2))
.take(257)
.collect();
let mut rsi = NorunCombineIterator::new(&v[..]);
assert_eq!(rsi.next(), Some(Norun(0, 128)));
assert_eq!(rsi.next(), Some(Norun(128, 128)));
assert_eq!(rsi.next(), Some(Norun(256, 1)));
assert_eq!(rsi.next(), None);
}
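The behaviour these two tests pin down can be restated: runs of at least three equal bytes become run chunks of at most 127 bytes (the RLE length field keeps only 7 bits), everything else is folded into literal chunks of at most 128 bytes, and a run remainder shorter than three bytes joins the following literal span. A standalone sketch of the combined chunking (hypothetical `chunks` helper modelling `NorunCombineIterator`'s output, not the crate's iterators):

```rust
/// RLE chunk kinds, mirroring the Run/Norun values asserted above
/// (hypothetical standalone types).
#[derive(Debug, PartialEq)]
enum Chunk {
    /// At least 3 equal bytes; capped at 127.
    Run(u8, usize),
    /// (start, len) of literal bytes; capped at 128.
    Norun(usize, usize),
}

/// Split `data` into Run/Norun chunks; short run remainders are folded into
/// the following literal span.
fn chunks(data: &[u8]) -> Vec<Chunk> {
    let mut out = Vec::new();
    let mut norun_start = 0; // start of the pending literal span
    let mut pos = 0;
    while pos < data.len() {
        // Measure the group of equal bytes starting at `pos`.
        let byte = data[pos];
        let mut end = pos + 1;
        while end < data.len() && data[end] == byte {
            end += 1;
        }
        if end - pos >= 3 {
            // Flush pending literals in chunks of at most 128 bytes.
            let mut s = norun_start;
            while s < pos {
                let len = (pos - s).min(128);
                out.push(Chunk::Norun(s, len));
                s += len;
            }
            // Emit runs of at most 127 bytes; a remainder shorter than 3
            // is not worth a run and joins the next literal span instead.
            let mut remaining = end - pos;
            while remaining >= 3 {
                let len = remaining.min(127);
                out.push(Chunk::Run(byte, len));
                remaining -= len;
            }
            norun_start = end - remaining;
        }
        pos = end;
    }
    // Flush any trailing literals.
    let mut s = norun_start;
    while s < data.len() {
        let len = (data.len() - s).min(128);
        out.push(Chunk::Norun(s, len));
        s += len;
    }
    out
}
```

For example, 129 copies of the same byte split into one 127-byte run plus a 2-byte literal, matching the `[2; 129]` case above.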

15
vendor/image/src/codecs/hdr/mod.rs vendored Normal file

@@ -0,0 +1,15 @@
//! Decoding of Radiance HDR Images
//!
//! A decoder for Radiance HDR images
//!
//! # Related Links
//!
//! * <http://radsite.lbl.gov/radiance/refer/filefmts.pdf>
//! * <http://www.graphics.cornell.edu/~bjw/rgbe/rgbe.c>
//!
mod decoder;
mod encoder;
pub use self::decoder::*;
pub use self::encoder::*;

470
vendor/image/src/codecs/ico/decoder.rs vendored Normal file

@@ -0,0 +1,470 @@
use byteorder::{LittleEndian, ReadBytesExt};
use std::convert::TryFrom;
use std::io::{self, Cursor, Read, Seek, SeekFrom};
use std::marker::PhantomData;
use std::{error, fmt, mem};
use crate::color::ColorType;
use crate::error::{
DecodingError, ImageError, ImageResult, UnsupportedError, UnsupportedErrorKind,
};
use crate::image::{self, ImageDecoder, ImageFormat};
use self::InnerDecoder::*;
use crate::codecs::bmp::BmpDecoder;
use crate::codecs::png::{PngDecoder, PNG_SIGNATURE};
/// Errors that can occur during decoding and parsing an ICO image or one of its enclosed images.
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum DecoderError {
/// The ICO directory is empty
NoEntries,
/// The number of color planes (0 or 1), or for CUR files the horizontal coordinate of the hotspot, is too big.
IcoEntryTooManyPlanesOrHotspot,
/// The bit depth (which may be 0, meaning unspecified), or for CUR files the vertical coordinate of the hotspot, is too big.
IcoEntryTooManyBitsPerPixelOrHotspot,
/// The entry is in PNG format and specified a length that is shorter than the PNG header.
PngShorterThanHeader,
/// The enclosed PNG is not in RGBA, which is invalid: https://blogs.msdn.microsoft.com/oldnewthing/20101022-00/?p=12473/.
PngNotRgba,
/// The entry is in BMP format and specified a data size that is not correct for the image and optional mask data.
InvalidDataSize,
/// The dimensions specified by the entry do not match the dimensions in the header of the enclosed image.
ImageEntryDimensionMismatch {
/// The mismatched subimage's type
format: IcoEntryImageFormat,
/// The dimensions specified by the entry
entry: (u16, u16),
/// The dimensions of the image itself
image: (u32, u32),
},
}
impl fmt::Display for DecoderError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
DecoderError::NoEntries => f.write_str("ICO directory contains no image"),
DecoderError::IcoEntryTooManyPlanesOrHotspot => {
f.write_str("ICO image entry has too many color planes or too large hotspot value")
}
DecoderError::IcoEntryTooManyBitsPerPixelOrHotspot => f.write_str(
"ICO image entry has too many bits per pixel or too large hotspot value",
),
DecoderError::PngShorterThanHeader => {
f.write_str("Entry specified a length that is shorter than PNG header!")
}
DecoderError::PngNotRgba => f.write_str("The PNG is not in RGBA format!"),
DecoderError::InvalidDataSize => {
f.write_str("ICO image data size did not match expected size")
}
DecoderError::ImageEntryDimensionMismatch {
format,
entry,
image,
} => f.write_fmt(format_args!(
"Entry{:?} and {}{:?} dimensions do not match!",
entry, format, image
)),
}
}
}
impl From<DecoderError> for ImageError {
fn from(e: DecoderError) -> ImageError {
ImageError::Decoding(DecodingError::new(ImageFormat::Ico.into(), e))
}
}
impl error::Error for DecoderError {}
/// The image formats an ICO may contain
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum IcoEntryImageFormat {
/// PNG in ARGB
Png,
/// BMP with optional alpha mask
Bmp,
}
impl fmt::Display for IcoEntryImageFormat {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str(match self {
IcoEntryImageFormat::Png => "PNG",
IcoEntryImageFormat::Bmp => "BMP",
})
}
}
impl From<IcoEntryImageFormat> for ImageFormat {
fn from(val: IcoEntryImageFormat) -> Self {
match val {
IcoEntryImageFormat::Png => ImageFormat::Png,
IcoEntryImageFormat::Bmp => ImageFormat::Bmp,
}
}
}
/// An ico decoder
pub struct IcoDecoder<R: Read> {
selected_entry: DirEntry,
inner_decoder: InnerDecoder<R>,
}
enum InnerDecoder<R: Read> {
Bmp(BmpDecoder<R>),
Png(Box<PngDecoder<R>>),
}
#[derive(Clone, Copy, Default)]
struct DirEntry {
width: u8,
height: u8,
// We ignore some header fields, as they are replicated in the enclosed PNG or BMP
// and are not needed for determining the best_entry.
#[allow(unused)]
color_count: u8,
// Wikipedia has this to say:
// Although Microsoft's technical documentation states that this value must be zero, the icon
// encoder built into .NET (System.Drawing.Icon.Save) sets this value to 255. It appears that
// the operating system ignores this value altogether.
#[allow(unused)]
reserved: u8,
// We ignore some header fields, as they are replicated in the enclosed PNG or BMP
// and are not needed for determining the best_entry.
#[allow(unused)]
num_color_planes: u16,
bits_per_pixel: u16,
image_length: u32,
image_offset: u32,
}
impl<R: Read + Seek> IcoDecoder<R> {
/// Create a new decoder that decodes from the stream ```r```
pub fn new(mut r: R) -> ImageResult<IcoDecoder<R>> {
let entries = read_entries(&mut r)?;
let entry = best_entry(entries)?;
let decoder = entry.decoder(r)?;
Ok(IcoDecoder {
selected_entry: entry,
inner_decoder: decoder,
})
}
}
fn read_entries<R: Read>(r: &mut R) -> ImageResult<Vec<DirEntry>> {
let _reserved = r.read_u16::<LittleEndian>()?;
let _type = r.read_u16::<LittleEndian>()?;
let count = r.read_u16::<LittleEndian>()?;
(0..count).map(|_| read_entry(r)).collect()
}
fn read_entry<R: Read>(r: &mut R) -> ImageResult<DirEntry> {
Ok(DirEntry {
width: r.read_u8()?,
height: r.read_u8()?,
color_count: r.read_u8()?,
reserved: r.read_u8()?,
num_color_planes: {
// This may be either the number of color planes (0 or 1), or the horizontal coordinate
// of the hotspot for CUR files.
let num = r.read_u16::<LittleEndian>()?;
if num > 256 {
return Err(DecoderError::IcoEntryTooManyPlanesOrHotspot.into());
}
num
},
bits_per_pixel: {
// This may be either the bit depth (may be 0 meaning unspecified),
// or the vertical coordinate of the hotspot for CUR files.
let num = r.read_u16::<LittleEndian>()?;
if num > 256 {
return Err(DecoderError::IcoEntryTooManyBitsPerPixelOrHotspot.into());
}
num
},
image_length: r.read_u32::<LittleEndian>()?,
image_offset: r.read_u32::<LittleEndian>()?,
})
}
/// Find the entry with the highest (color depth, size).
fn best_entry(mut entries: Vec<DirEntry>) -> ImageResult<DirEntry> {
let mut best = entries.pop().ok_or(DecoderError::NoEntries)?;
let mut best_score = (
best.bits_per_pixel,
u32::from(best.real_width()) * u32::from(best.real_height()),
);
for entry in entries {
let score = (
entry.bits_per_pixel,
u32::from(entry.real_width()) * u32::from(entry.real_height()),
);
if score > best_score {
best = entry;
best_score = score;
}
}
Ok(best)
}
impl DirEntry {
fn real_width(&self) -> u16 {
match self.width {
0 => 256,
w => u16::from(w),
}
}
fn real_height(&self) -> u16 {
match self.height {
0 => 256,
h => u16::from(h),
}
}
fn matches_dimensions(&self, width: u32, height: u32) -> bool {
u32::from(self.real_width()) == width.min(256)
&& u32::from(self.real_height()) == height.min(256)
}
fn seek_to_start<R: Read + Seek>(&self, r: &mut R) -> ImageResult<()> {
r.seek(SeekFrom::Start(u64::from(self.image_offset)))?;
Ok(())
}
fn is_png<R: Read + Seek>(&self, r: &mut R) -> ImageResult<bool> {
self.seek_to_start(r)?;
// Read the first 8 bytes to sniff the image.
let mut signature = [0u8; 8];
r.read_exact(&mut signature)?;
Ok(signature == PNG_SIGNATURE)
}
fn decoder<R: Read + Seek>(&self, mut r: R) -> ImageResult<InnerDecoder<R>> {
let is_png = self.is_png(&mut r)?;
self.seek_to_start(&mut r)?;
if is_png {
Ok(Png(Box::new(PngDecoder::new(r)?)))
} else {
Ok(Bmp(BmpDecoder::new_with_ico_format(r)?))
}
}
}
/// Wrapper struct around a `Cursor<Vec<u8>>`
pub struct IcoReader<R>(Cursor<Vec<u8>>, PhantomData<R>);
impl<R> Read for IcoReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
self.0.read(buf)
}
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> {
if self.0.position() == 0 && buf.is_empty() {
mem::swap(buf, self.0.get_mut());
Ok(buf.len())
} else {
self.0.read_to_end(buf)
}
}
}
impl<'a, R: 'a + Read + Seek> ImageDecoder<'a> for IcoDecoder<R> {
type Reader = IcoReader<R>;
fn dimensions(&self) -> (u32, u32) {
match self.inner_decoder {
Bmp(ref decoder) => decoder.dimensions(),
Png(ref decoder) => decoder.dimensions(),
}
}
fn color_type(&self) -> ColorType {
match self.inner_decoder {
Bmp(ref decoder) => decoder.color_type(),
Png(ref decoder) => decoder.color_type(),
}
}
fn into_reader(self) -> ImageResult<Self::Reader> {
Ok(IcoReader(
Cursor::new(image::decoder_to_vec(self)?),
PhantomData,
))
}
fn read_image(self, buf: &mut [u8]) -> ImageResult<()> {
assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes()));
match self.inner_decoder {
Png(decoder) => {
if self.selected_entry.image_length < PNG_SIGNATURE.len() as u32 {
return Err(DecoderError::PngShorterThanHeader.into());
}
// Check if the image dimensions match the ones in the image data.
let (width, height) = decoder.dimensions();
if !self.selected_entry.matches_dimensions(width, height) {
return Err(DecoderError::ImageEntryDimensionMismatch {
format: IcoEntryImageFormat::Png,
entry: (
self.selected_entry.real_width(),
self.selected_entry.real_height(),
),
image: (width, height),
}
.into());
}
// Embedded PNG images can only be of the 32BPP RGBA format.
// https://blogs.msdn.microsoft.com/oldnewthing/20101022-00/?p=12473/
if decoder.color_type() != ColorType::Rgba8 {
return Err(DecoderError::PngNotRgba.into());
}
decoder.read_image(buf)
}
Bmp(mut decoder) => {
let (width, height) = decoder.dimensions();
if !self.selected_entry.matches_dimensions(width, height) {
return Err(DecoderError::ImageEntryDimensionMismatch {
format: IcoEntryImageFormat::Bmp,
entry: (
self.selected_entry.real_width(),
self.selected_entry.real_height(),
),
image: (width, height),
}
.into());
}
// The ICO decoder needs an alpha channel to apply the AND mask.
if decoder.color_type() != ColorType::Rgba8 {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Bmp.into(),
UnsupportedErrorKind::Color(decoder.color_type().into()),
),
));
}
decoder.read_image_data(buf)?;
let r = decoder.reader();
let image_end = r.stream_position()?;
let data_end = u64::from(self.selected_entry.image_offset)
+ u64::from(self.selected_entry.image_length);
let mask_row_bytes = ((width + 31) / 32) * 4;
let mask_length = u64::from(mask_row_bytes) * u64::from(height);
// data_end should be image_end + the mask length (mask_row_bytes * height).
// According to
// https://devblogs.microsoft.com/oldnewthing/20101021-00/?p=12483
// the mask is required, but according to Wikipedia
// https://en.wikipedia.org/wiki/ICO_(file_format)
// the mask is not required. Unfortunately, Wikipedia does not have a citation
// for that claim, so we can't be sure which is correct.
if data_end >= image_end + mask_length {
// If there's an AND mask following the image, read and apply it.
for y in 0..height {
let mut x = 0;
for _ in 0..mask_row_bytes {
// Apply the bits of each byte until we reach the end of the row.
let mask_byte = r.read_u8()?;
for bit in (0..8).rev() {
if x >= width {
break;
}
if mask_byte & (1 << bit) != 0 {
// Set alpha channel to transparent.
buf[((height - y - 1) * width + x) as usize * 4 + 3] = 0;
}
x += 1;
}
}
}
Ok(())
} else if data_end == image_end {
// accept images with no mask data
Ok(())
} else {
Err(DecoderError::InvalidDataSize.into())
}
}
}
}
}
#[cfg(test)]
mod test {
use super::*;
// Test that BMP images without an alpha channel inside ICOs don't panic.
// Because the test data is invalid, decoding should produce an error.
#[test]
fn bmp_16_with_missing_alpha_channel() {
let data = vec![
0x00, 0x00, 0x01, 0x00, 0x01, 0x00, 0x0e, 0x04, 0xc3, 0x7e, 0x00, 0x00, 0x00, 0x00,
0x7c, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x00, 0x00, 0xf8, 0xff, 0xff, 0xff, 0x01, 0x00,
0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x8f, 0xf6, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x20, 0x66, 0x74, 0x83, 0x70, 0x61, 0x76, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02,
0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xeb, 0x00, 0x9b, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4e, 0x47, 0x0d,
0x0a, 0x1a, 0x0a, 0x00, 0x00, 0x00, 0x62, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x0c,
0x00, 0x00, 0x00, 0xc3, 0x3f, 0x94, 0x61, 0xaa, 0x17, 0x4d, 0x8d, 0x79, 0x1d, 0x8b,
0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x2e, 0x28, 0x40, 0xe5, 0x9f,
0x4b, 0x4d, 0xe9, 0x87, 0xd3, 0xda, 0xd6, 0x89, 0x81, 0xc5, 0xa4, 0xa1, 0x60, 0x98,
0x31, 0xc7, 0x1d, 0xb6, 0x8f, 0x20, 0xc8, 0x3e, 0xee, 0xd8, 0xe4, 0x8f, 0xee, 0x7b,
0x48, 0x9b, 0x88, 0x25, 0x13, 0xda, 0xa4, 0x13, 0xa4, 0x00, 0x00, 0x00, 0x00, 0x40,
0x16, 0x01, 0xff, 0xff, 0xff, 0xff, 0xe9, 0x0c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xa3, 0x66, 0x64, 0x41, 0x54, 0xa3, 0xa3, 0x00, 0x00, 0x00, 0xb8, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xa3, 0x66, 0x64, 0x41, 0x54, 0xa3, 0xa3,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8f, 0xf6, 0xff, 0xff,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x66, 0x74, 0x83, 0x70, 0x61, 0x76,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff,
0xeb, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00, 0x00, 0x62, 0x49,
0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00,
0x00, 0x00, 0x00, 0xff, 0xff, 0x94, 0xc8, 0x00, 0x02, 0x0c, 0x00, 0xff, 0xff, 0xc6,
0x84, 0x00, 0x2a, 0x75, 0x03, 0xa3, 0x05, 0xfb, 0xe1, 0x6e, 0xe8, 0x27, 0xd6, 0xd3,
0x96, 0xc1, 0xe4, 0x30, 0x0c, 0x05, 0xb9, 0xa3, 0x8b, 0x29, 0xda, 0xa4, 0xf1, 0x4d,
0xf3, 0xb2, 0x98, 0x2b, 0xe6, 0x93, 0x07, 0xf9, 0xca, 0x2b, 0xc2, 0x39, 0x20, 0xba,
0x7c, 0xa0, 0xb1, 0x43, 0xe6, 0xf9, 0xdc, 0xd1, 0xc2, 0x52, 0xdc, 0x41, 0xc1, 0x2f,
0x29, 0xf7, 0x46, 0x32, 0xda, 0x1b, 0x72, 0x8c, 0xe6, 0x2b, 0x01, 0xe5, 0x49, 0x21,
0x89, 0x89, 0xe4, 0x3d, 0xa1, 0xdb, 0x3b, 0x4a, 0x0b, 0x52, 0x86, 0x52, 0x33, 0x9d,
0xb2, 0xcf, 0x4a, 0x86, 0x53, 0xd7, 0xa9, 0x4b, 0xaf, 0x62, 0x06, 0x49, 0x53, 0x00,
0xc3, 0x3f, 0x94, 0x61, 0xaa, 0x17, 0x4d, 0x8d, 0x79, 0x1d, 0x8b, 0x10, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x2e, 0x28, 0x40, 0xe5, 0x9f, 0x4b, 0x4d, 0xe9,
0x87, 0xd3, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xe7, 0xc5, 0x00,
0x02, 0x00, 0x00, 0x00, 0x06, 0x00, 0x0b, 0x00, 0x50, 0x31, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x76, 0x76, 0x01, 0x00, 0x00, 0x00, 0x76, 0x00,
0x00, 0x23, 0x3f, 0x52, 0x41, 0x44, 0x49, 0x41, 0x4e, 0x43, 0x45, 0x61, 0x50, 0x35,
0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x4d, 0x47, 0x49, 0x46, 0x38, 0x37, 0x61, 0x05,
0x50, 0x37, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc7, 0x37, 0x61,
];
let decoder = IcoDecoder::new(Cursor::new(&data)).unwrap();
let mut buf = vec![0; usize::try_from(decoder.total_bytes()).unwrap()];
assert!(decoder.read_image(&mut buf).is_err());
}
}
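The AND-mask handling in `read_image` above can be isolated: each mask row is padded to a 32-bit boundary (`((width + 31) / 32) * 4` bytes), mask rows are stored bottom-up relative to the decoded RGBA buffer, and a set bit marks a transparent pixel. A minimal sketch (hypothetical `apply_and_mask` helper mirroring the decoder's loop):

```rust
/// Apply an ICO AND mask to a decoded RGBA buffer (hypothetical helper).
///
/// Each mask row is padded to a 32-bit boundary, rows are stored bottom-up
/// relative to the RGBA buffer, and a set bit makes the pixel transparent.
fn apply_and_mask(rgba: &mut [u8], mask: &[u8], width: u32, height: u32) {
    let mask_row_bytes = ((width + 31) / 32) * 4;
    for y in 0..height {
        let row = &mask[(y * mask_row_bytes) as usize..];
        let mut x = 0;
        for byte_idx in 0..mask_row_bytes as usize {
            let mask_byte = row[byte_idx];
            // Mask bits run MSB-first across the row.
            for bit in (0..8).rev() {
                if x >= width {
                    break;
                }
                if mask_byte & (1 << bit) != 0 {
                    // Clear the alpha byte; mask row 0 is the bottom image row.
                    rgba[((height - y - 1) * width + x) as usize * 4 + 3] = 0;
                }
                x += 1;
            }
        }
    }
}
```

For a 2x2 image the mask still uses 4 bytes per row because of the 32-bit padding; only the leading bits of each row carry pixel data.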

194
vendor/image/src/codecs/ico/encoder.rs vendored Normal file

@@ -0,0 +1,194 @@
use byteorder::{LittleEndian, WriteBytesExt};
use std::borrow::Cow;
use std::io::{self, Write};
use crate::color::ColorType;
use crate::error::{ImageError, ImageResult, ParameterError, ParameterErrorKind};
use crate::image::ImageEncoder;
use crate::codecs::png::PngEncoder;
// Enum value indicating an ICO image (as opposed to a CUR image):
const ICO_IMAGE_TYPE: u16 = 1;
// The length of an ICO file ICONDIR structure, in bytes:
const ICO_ICONDIR_SIZE: u32 = 6;
// The length of an ICO file DIRENTRY structure, in bytes:
const ICO_DIRENTRY_SIZE: u32 = 16;
/// ICO encoder
pub struct IcoEncoder<W: Write> {
w: W,
}
/// An ICO image entry
pub struct IcoFrame<'a> {
// Pre-encoded PNG or BMP
encoded_image: Cow<'a, [u8]>,
// Stored as `0 => 256, n => n`
width: u8,
// Stored as `0 => 256, n => n`
height: u8,
color_type: ColorType,
}
impl<'a> IcoFrame<'a> {
/// Construct a new `IcoFrame` using a pre-encoded PNG or BMP
///
/// The `width` and `height` must be between 1 and 256 (inclusive).
pub fn with_encoded(
encoded_image: impl Into<Cow<'a, [u8]>>,
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<Self> {
let encoded_image = encoded_image.into();
if !(1..=256).contains(&width) {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(format!(
"the image width must be `1..=256`, instead width {} was provided",
width,
)),
)));
}
if !(1..=256).contains(&height) {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(format!(
"the image height must be `1..=256`, instead height {} was provided",
height,
)),
)));
}
Ok(Self {
encoded_image,
width: width as u8,
height: height as u8,
color_type,
})
}
/// Construct a new `IcoFrame` by encoding `buf` as a PNG
///
/// The `width` and `height` must be between 1 and 256 (inclusive)
pub fn as_png(buf: &[u8], width: u32, height: u32, color_type: ColorType) -> ImageResult<Self> {
let mut image_data: Vec<u8> = Vec::new();
PngEncoder::new(&mut image_data).write_image(buf, width, height, color_type)?;
let frame = Self::with_encoded(image_data, width, height, color_type)?;
Ok(frame)
}
}
impl<W: Write> IcoEncoder<W> {
/// Create a new encoder that writes its output to ```w```.
pub fn new(w: W) -> IcoEncoder<W> {
IcoEncoder { w }
}
/// Encodes the image ```image``` that has dimensions ```width``` and
/// ```height``` and ```ColorType``` ```c```. The dimensions of the image
/// must be between 1 and 256 (inclusive) or an error will be returned.
///
/// Expects data to be big endian.
#[deprecated = "Use `IcoEncoder::write_image` instead. Beware that `write_image` has a different endianness convention"]
pub fn encode(self, data: &[u8], width: u32, height: u32, color: ColorType) -> ImageResult<()> {
let mut image_data: Vec<u8> = Vec::new();
#[allow(deprecated)]
PngEncoder::new(&mut image_data).encode(data, width, height, color)?;
let image = IcoFrame::with_encoded(&image_data, width, height, color)?;
self.encode_images(&[image])
}
/// Takes some [`IcoFrame`]s and encodes them into an ICO.
///
/// `images` is a list of images, usually ordered by dimension; its length
/// must be between 1 and 65535 (inclusive).
pub fn encode_images(mut self, images: &[IcoFrame<'_>]) -> ImageResult<()> {
if !(1..=usize::from(u16::MAX)).contains(&images.len()) {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(format!(
"the number of images must be `1..=u16::MAX`, instead {} images were provided",
images.len(),
)),
)));
}
let num_images = images.len() as u16;
let mut offset = ICO_ICONDIR_SIZE + (ICO_DIRENTRY_SIZE * (images.len() as u32));
write_icondir(&mut self.w, num_images)?;
for image in images {
write_direntry(
&mut self.w,
image.width,
image.height,
image.color_type,
offset,
image.encoded_image.len() as u32,
)?;
offset += image.encoded_image.len() as u32;
}
for image in images {
self.w.write_all(&image.encoded_image)?;
}
Ok(())
}
}
impl<W: Write> ImageEncoder for IcoEncoder<W> {
/// Write an ICO image with the specified width, height, and color type.
///
/// For color types with 16-bit per channel or larger, the contents of `buf` should be in
/// native endian.
///
/// WARNING: In image 0.23.14 and earlier this method erroneously expected buf to be in big endian.
fn write_image(
self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
let image = IcoFrame::as_png(buf, width, height, color_type)?;
self.encode_images(&[image])
}
}
fn write_icondir<W: Write>(w: &mut W, num_images: u16) -> io::Result<()> {
// Reserved field (must be zero):
w.write_u16::<LittleEndian>(0)?;
// Image type (ICO or CUR):
w.write_u16::<LittleEndian>(ICO_IMAGE_TYPE)?;
// Number of images in the file:
w.write_u16::<LittleEndian>(num_images)?;
Ok(())
}
fn write_direntry<W: Write>(
w: &mut W,
width: u8,
height: u8,
color: ColorType,
data_start: u32,
data_size: u32,
) -> io::Result<()> {
// Image dimensions:
w.write_u8(width)?;
w.write_u8(height)?;
// Number of colors in palette (or zero for no palette):
w.write_u8(0)?;
// Reserved field (must be zero):
w.write_u8(0)?;
// Color planes:
w.write_u16::<LittleEndian>(0)?;
// Bits per pixel:
w.write_u16::<LittleEndian>(color.bits_per_pixel())?;
// Image data size, in bytes:
w.write_u32::<LittleEndian>(data_size)?;
// Image data offset, in bytes:
w.write_u32::<LittleEndian>(data_start)?;
Ok(())
}
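Putting `write_icondir` and `write_direntry` together: a file starts with a 6-byte ICONDIR followed by one 16-byte DIRENTRY per image, so the first image's data begins at offset `6 + 16 * n` for `n` images, exactly as `encode_images` computes. A dependency-free sketch of that header for a single image (hypothetical `ico_header` helper using `to_le_bytes` instead of the `byteorder` crate):

```rust
/// Serialize the ICONDIR + one DIRENTRY for a single-image ICO
/// (hypothetical helper, mirroring the writer functions above).
fn ico_header(width: u8, height: u8, bits_per_pixel: u16, data_size: u32) -> Vec<u8> {
    let mut v = Vec::with_capacity(22);
    // ICONDIR (6 bytes)
    v.extend_from_slice(&0u16.to_le_bytes()); // reserved, must be zero
    v.extend_from_slice(&1u16.to_le_bytes()); // image type: 1 = ICO
    v.extend_from_slice(&1u16.to_le_bytes()); // number of images
    // DIRENTRY (16 bytes)
    v.push(width); // 0 encodes 256
    v.push(height); // 0 encodes 256
    v.push(0); // palette size (no palette)
    v.push(0); // reserved
    v.extend_from_slice(&0u16.to_le_bytes()); // color planes
    v.extend_from_slice(&bits_per_pixel.to_le_bytes());
    v.extend_from_slice(&data_size.to_le_bytes()); // image data size
    v.extend_from_slice(&(6u32 + 16).to_le_bytes()); // data offset for 1 image
    v
}
```

With one image the header is 22 bytes long, so the encoded PNG or BMP payload starts immediately after it.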

14
vendor/image/src/codecs/ico/mod.rs vendored Normal file

@@ -0,0 +1,14 @@
//! Decoding and Encoding of ICO files
//!
//! A decoder and encoder for ICO (Windows Icon) image container files.
//!
//! # Related Links
//! * <https://msdn.microsoft.com/en-us/library/ms997538.aspx>
//! * <https://en.wikipedia.org/wiki/ICO_%28file_format%29>
pub use self::decoder::IcoDecoder;
#[allow(deprecated)]
pub use self::encoder::{IcoEncoder, IcoFrame};
mod decoder;
mod encoder;

1289
vendor/image/src/codecs/jpeg/decoder.rs vendored Normal file

File diff suppressed because it is too large

1074
vendor/image/src/codecs/jpeg/encoder.rs vendored Normal file

File diff suppressed because it is too large

63
vendor/image/src/codecs/jpeg/entropy.rs vendored Normal file

@@ -0,0 +1,63 @@
/// Given an array containing the number of codes of each code length,
/// this function generates the huffman code sizes and their respective
/// codes, as specified by the JPEG spec.
const fn derive_codes_and_sizes(bits: &[u8; 16]) -> ([u8; 256], [u16; 256]) {
let mut huffsize = [0u8; 256];
let mut huffcode = [0u16; 256];
let mut k = 0;
// Annex C.2
// Figure C.1
// Generate table of individual code lengths
let mut i = 0;
while i < 16 {
let mut j = 0;
while j < bits[i as usize] {
huffsize[k] = i + 1;
k += 1;
j += 1;
}
i += 1;
}
huffsize[k] = 0;
// Annex C.2
// Figure C.2
// Generate table of huffman codes
k = 0;
let mut code = 0u16;
let mut size = huffsize[0];
while huffsize[k] != 0 {
huffcode[k] = code;
code += 1;
k += 1;
if huffsize[k] == size {
continue;
}
// FIXME there is something wrong with this code
let diff = huffsize[k].wrapping_sub(size);
code = if diff < 16 { code << diff as usize } else { 0 };
size = size.wrapping_add(diff);
}
(huffsize, huffcode)
}
pub(crate) const fn build_huff_lut_const(bits: &[u8; 16], huffval: &[u8]) -> [(u8, u16); 256] {
let mut lut = [(17u8, 0u16); 256];
let (huffsize, huffcode) = derive_codes_and_sizes(bits);
let mut i = 0;
while i < huffval.len() {
lut[huffval[i] as usize] = (huffsize[i], huffcode[i]);
i += 1;
}
lut
}
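`derive_codes_and_sizes` implements the canonical Huffman construction of JPEG Annex C: codes of equal length are consecutive integers, and stepping to the next length left-shifts the running counter. A compact restatement (hypothetical helper iterating over lengths directly, instead of the Figure C.1/C.2 table passes above):

```rust
/// For each code length 1..=16, `bits[len - 1]` gives the number of codes of
/// that length; returns (size, code) pairs in canonical order.
/// Hypothetical sketch, not the crate's const fn.
fn canonical_codes(bits: &[u8; 16]) -> Vec<(u8, u16)> {
    let mut out = Vec::new();
    let mut code = 0u16;
    for (i, &count) in bits.iter().enumerate() {
        let size = i as u8 + 1;
        for _ in 0..count {
            out.push((size, code));
            code += 1; // codes of the same length are consecutive
        }
        // Next length: append a zero bit to every subsequent code.
        code <<= 1;
    }
    out
}
```

For a table with two codes of length 2 and one of length 3, this yields the bit patterns 00, 01, and 100.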

16
vendor/image/src/codecs/jpeg/mod.rs vendored Normal file

@@ -0,0 +1,16 @@
//! Decoding and Encoding of JPEG Images
//!
//! JPEG (Joint Photographic Experts Group) is an image format that supports lossy compression.
//! This module implements the Baseline JPEG standard.
//!
//! # Related Links
//! * <http://www.w3.org/Graphics/JPEG/itu-t81.pdf> - The JPEG specification
//!
pub use self::decoder::JpegDecoder;
pub use self::encoder::{JpegEncoder, PixelDensity, PixelDensityUnit};
mod decoder;
mod encoder;
mod entropy;
mod transform;

196
vendor/image/src/codecs/jpeg/transform.rs vendored Normal file

@@ -0,0 +1,196 @@
/*
fdct is a Rust translation of jfdctint.c from the
Independent JPEG Group's libjpeg version 9a
obtained from http://www.ijg.org/files/jpegsr9a.zip
It comes with the following conditions of distribution and use:
In plain English:
1. We don't promise that this software works. (But if you find any bugs,
please let us know!)
2. You can use this software for whatever you want. You don't have to pay us.
3. You may not pretend that you wrote this software. If you use it in a
program, you must acknowledge somewhere in your documentation that
you've used the IJG code.
In legalese:
The authors make NO WARRANTY or representation, either express or implied,
with respect to this software, its quality, accuracy, merchantability, or
fitness for a particular purpose. This software is provided "AS IS", and you,
its user, assume the entire risk as to its quality and accuracy.
This software is copyright (C) 1991-2014, Thomas G. Lane, Guido Vollbeding.
All Rights Reserved except as specified below.
Permission is hereby granted to use, copy, modify, and distribute this
software (or portions thereof) for any purpose, without fee, subject to these
conditions:
(1) If any part of the source code for this software is distributed, then this
README file must be included, with this copyright and no-warranty notice
unaltered; and any additions, deletions, or changes to the original files
must be clearly indicated in accompanying documentation.
(2) If only executable code is distributed, then the accompanying
documentation must state that "this software is based in part on the work of
the Independent JPEG Group".
(3) Permission for use of this software is granted only if the user accepts
full responsibility for any undesirable consequences; the authors accept
NO LIABILITY for damages of any kind.
These conditions apply to any software derived from or based on the IJG code,
not just to the unmodified library. If you use our work, you ought to
acknowledge us.
Permission is NOT granted for the use of any IJG author's name or company name
in advertising or publicity relating to this software or products derived from
it. This software may be referred to only as "the Independent JPEG Group's
software".
We specifically permit and encourage the use of this software as the basis of
commercial products, provided that all warranty or liability claims are
assumed by the product vendor.
*/
static CONST_BITS: i32 = 13;
static PASS1_BITS: i32 = 2;
static FIX_0_298631336: i32 = 2446;
static FIX_0_390180644: i32 = 3196;
static FIX_0_541196100: i32 = 4433;
static FIX_0_765366865: i32 = 6270;
static FIX_0_899976223: i32 = 7373;
static FIX_1_175875602: i32 = 9633;
static FIX_1_501321110: i32 = 12_299;
static FIX_1_847759065: i32 = 15_137;
static FIX_1_961570560: i32 = 16_069;
static FIX_2_053119869: i32 = 16_819;
static FIX_2_562915447: i32 = 20_995;
static FIX_3_072711026: i32 = 25_172;
pub(crate) fn fdct(samples: &[u8; 64], coeffs: &mut [i32; 64]) {
// Pass 1: process rows.
// Results are scaled by sqrt(8) compared to a true DCT
// furthermore we scale the results by 2**PASS1_BITS
for y in 0usize..8 {
let y0 = y * 8;
// Even part
let t0 = i32::from(samples[y0]) + i32::from(samples[y0 + 7]);
let t1 = i32::from(samples[y0 + 1]) + i32::from(samples[y0 + 6]);
let t2 = i32::from(samples[y0 + 2]) + i32::from(samples[y0 + 5]);
let t3 = i32::from(samples[y0 + 3]) + i32::from(samples[y0 + 4]);
let t10 = t0 + t3;
let t12 = t0 - t3;
let t11 = t1 + t2;
let t13 = t1 - t2;
let t0 = i32::from(samples[y0]) - i32::from(samples[y0 + 7]);
let t1 = i32::from(samples[y0 + 1]) - i32::from(samples[y0 + 6]);
let t2 = i32::from(samples[y0 + 2]) - i32::from(samples[y0 + 5]);
let t3 = i32::from(samples[y0 + 3]) - i32::from(samples[y0 + 4]);
// Apply unsigned -> signed conversion
coeffs[y0] = (t10 + t11 - 8 * 128) << PASS1_BITS as usize;
coeffs[y0 + 4] = (t10 - t11) << PASS1_BITS as usize;
let mut z1 = (t12 + t13) * FIX_0_541196100;
// Add fudge factor here for final descale
z1 += 1 << (CONST_BITS - PASS1_BITS - 1) as usize;
coeffs[y0 + 2] = (z1 + t12 * FIX_0_765366865) >> (CONST_BITS - PASS1_BITS) as usize;
coeffs[y0 + 6] = (z1 - t13 * FIX_1_847759065) >> (CONST_BITS - PASS1_BITS) as usize;
// Odd part
let t12 = t0 + t2;
let t13 = t1 + t3;
let mut z1 = (t12 + t13) * FIX_1_175875602;
// Add fudge factor here for final descale
z1 += 1 << (CONST_BITS - PASS1_BITS - 1) as usize;
let mut t12 = t12 * (-FIX_0_390180644);
let mut t13 = t13 * (-FIX_1_961570560);
t12 += z1;
t13 += z1;
let z1 = (t0 + t3) * (-FIX_0_899976223);
let mut t0 = t0 * FIX_1_501321110;
let mut t3 = t3 * FIX_0_298631336;
t0 += z1 + t12;
t3 += z1 + t13;
let z1 = (t1 + t2) * (-FIX_2_562915447);
let mut t1 = t1 * FIX_3_072711026;
let mut t2 = t2 * FIX_2_053119869;
t1 += z1 + t13;
t2 += z1 + t12;
coeffs[y0 + 1] = t0 >> (CONST_BITS - PASS1_BITS) as usize;
coeffs[y0 + 3] = t1 >> (CONST_BITS - PASS1_BITS) as usize;
coeffs[y0 + 5] = t2 >> (CONST_BITS - PASS1_BITS) as usize;
coeffs[y0 + 7] = t3 >> (CONST_BITS - PASS1_BITS) as usize;
}
// Pass 2: process columns
// We remove the PASS1_BITS scaling but leave the results scaled up an
// overall factor of 8
for x in (0usize..8).rev() {
// Even part
let t0 = coeffs[x] + coeffs[x + 8 * 7];
let t1 = coeffs[x + 8] + coeffs[x + 8 * 6];
let t2 = coeffs[x + 8 * 2] + coeffs[x + 8 * 5];
let t3 = coeffs[x + 8 * 3] + coeffs[x + 8 * 4];
// Add fudge factor here for final descale
let t10 = t0 + t3 + (1 << (PASS1_BITS - 1) as usize);
let t12 = t0 - t3;
let t11 = t1 + t2;
let t13 = t1 - t2;
let t0 = coeffs[x] - coeffs[x + 8 * 7];
let t1 = coeffs[x + 8] - coeffs[x + 8 * 6];
let t2 = coeffs[x + 8 * 2] - coeffs[x + 8 * 5];
let t3 = coeffs[x + 8 * 3] - coeffs[x + 8 * 4];
coeffs[x] = (t10 + t11) >> PASS1_BITS as usize;
coeffs[x + 8 * 4] = (t10 - t11) >> PASS1_BITS as usize;
let mut z1 = (t12 + t13) * FIX_0_541196100;
// Add fudge factor here for final descale
z1 += 1 << (CONST_BITS + PASS1_BITS - 1) as usize;
coeffs[x + 8 * 2] = (z1 + t12 * FIX_0_765366865) >> (CONST_BITS + PASS1_BITS) as usize;
coeffs[x + 8 * 6] = (z1 - t13 * FIX_1_847759065) >> (CONST_BITS + PASS1_BITS) as usize;
// Odd part
let t12 = t0 + t2;
let t13 = t1 + t3;
let mut z1 = (t12 + t13) * FIX_1_175875602;
// Add fudge factor here for final descale
z1 += 1 << (CONST_BITS + PASS1_BITS - 1) as usize;
let mut t12 = t12 * (-FIX_0_390180644);
let mut t13 = t13 * (-FIX_1_961570560);
t12 += z1;
t13 += z1;
let z1 = (t0 + t3) * (-FIX_0_899976223);
let mut t0 = t0 * FIX_1_501321110;
let mut t3 = t3 * FIX_0_298631336;
t0 += z1 + t12;
t3 += z1 + t13;
let z1 = (t1 + t2) * (-FIX_2_562915447);
let mut t1 = t1 * FIX_3_072711026;
let mut t2 = t2 * FIX_2_053119869;
t1 += z1 + t13;
t2 += z1 + t12;
coeffs[x + 8] = t0 >> (CONST_BITS + PASS1_BITS) as usize;
coeffs[x + 8 * 3] = t1 >> (CONST_BITS + PASS1_BITS) as usize;
coeffs[x + 8 * 5] = t2 >> (CONST_BITS + PASS1_BITS) as usize;
coeffs[x + 8 * 7] = t3 >> (CONST_BITS + PASS1_BITS) as usize;
}
}
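As a sanity check on the scaling convention stated in the comments (an overall factor of 8 relative to an orthonormal DCT of the level-shifted samples), here is a hedged, self-contained sketch. `dct2d_orthonormal` is an illustrative helper written for this example, not part of this module:

```rust
// Naive orthonormal 2-D DCT-II reference (f64). The fixed-point `fdct`
// above emits 8x this transform of the (sample - 128) block.
fn dct2d_orthonormal(block: &[f64; 64]) -> [f64; 64] {
    let mut out = [0.0; 64];
    for v in 0..8 {
        for u in 0..8 {
            let cu = if u == 0 { (1.0f64 / 8.0).sqrt() } else { (2.0f64 / 8.0).sqrt() };
            let cv = if v == 0 { (1.0f64 / 8.0).sqrt() } else { (2.0f64 / 8.0).sqrt() };
            let mut sum = 0.0;
            for y in 0..8 {
                for x in 0..8 {
                    sum += block[y * 8 + x]
                        * ((2.0 * x as f64 + 1.0) * u as f64 * std::f64::consts::PI / 16.0).cos()
                        * ((2.0 * y as f64 + 1.0) * v as f64 * std::f64::consts::PI / 16.0).cos();
                }
            }
            out[v * 8 + u] = cu * cv * sum;
        }
    }
    out
}

fn main() {
    // A flat block of sample value 130: after the -128 level shift the
    // orthonormal DC term is 8 * (130 - 128) = 16 and every AC term is 0,
    // so `fdct` would emit 8 * 16 = 128 at index 0.
    let shifted = [2.0f64; 64];
    let coeffs = dct2d_orthonormal(&shifted);
    assert!((coeffs[0] - 16.0).abs() < 1e-9);
    assert!(coeffs[1..].iter().all(|c| c.abs() < 1e-9));
}
```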

592
vendor/image/src/codecs/openexr.rs vendored Normal file

@@ -0,0 +1,592 @@
//! Decoding of OpenEXR (.exr) Images
//!
//! OpenEXR is an image format that is widely used, especially in VFX,
//! because it supports lossless and lossy compression for float data.
//!
//! This decoder only supports RGB and RGBA images.
//! If an image does not contain alpha information,
//! it is defaulted to `1.0` (no transparency).
//!
//! # Related Links
//! * <https://www.openexr.com/documentation.html> - The OpenEXR reference.
//!
//!
//! Current limitations (July 2021):
//! - only pixel type `Rgba32F` and `Rgba16F` are supported
//! - only non-deep rgb/rgba files supported, no conversion from/to YCbCr or similar
//! - only the first non-deep rgb layer is used
//! - only the largest mip map level is used
//! - pixels outside display window are lost
//! - meta data is lost
//! - dwaa/dwab compressed images not supported yet by the exr library
//! - (chroma) subsampling not supported yet by the exr library
use exr::prelude::*;
use crate::error::{DecodingError, EncodingError, ImageFormatHint};
use crate::image::decoder_to_vec;
use crate::{
ColorType, ExtendedColorType, ImageDecoder, ImageEncoder, ImageError, ImageFormat, ImageResult,
Progress,
};
use std::convert::TryInto;
use std::io::{Cursor, Read, Seek, Write};
/// An OpenEXR decoder. Immediately reads the meta data from the file.
#[derive(Debug)]
pub struct OpenExrDecoder<R> {
exr_reader: exr::block::reader::Reader<R>,
// select a header that is rgb and not deep
header_index: usize,
// decode either rgb or rgba.
// can be specified to include or discard alpha channels.
// if none, the alpha channel will only be allocated where the file contains data for it.
alpha_preference: Option<bool>,
alpha_present_in_file: bool,
}
impl<R: Read + Seek> OpenExrDecoder<R> {
/// Create a decoder. Consumes the first few bytes of the source to extract image dimensions.
/// Assumes the reader is buffered. In most cases,
/// you should wrap your reader in a `BufReader` for best performance.
/// Loads an alpha channel if the file has alpha samples.
/// Use `with_alpha_preference` if you want to load or not load alpha unconditionally.
pub fn new(source: R) -> ImageResult<Self> {
Self::with_alpha_preference(source, None)
}
/// Create a decoder. Consumes the first few bytes of the source to extract image dimensions.
/// Assumes the reader is buffered. In most cases,
/// you should wrap your reader in a `BufReader` for best performance.
/// If alpha preference is specified, an alpha channel will
/// always be present or always be not present in the returned image.
/// If alpha preference is none, the alpha channel will only be returned if it is found in the file.
pub fn with_alpha_preference(source: R, alpha_preference: Option<bool>) -> ImageResult<Self> {
// read meta data, then wait for further instructions, keeping the file open and ready
let exr_reader = exr::block::read(source, false).map_err(to_image_err)?;
let header_index = exr_reader
.headers()
.iter()
.position(|header| {
// check if r/g/b exists in the channels
let has_rgb = ["R", "G", "B"]
.iter()
.all(|&required| // alpha will be optional
header.channels.find_index_of_channel(&Text::from(required)).is_some());
// we currently don't support deep images, or images with color spaces other than rgb
!header.deep && has_rgb
})
.ok_or_else(|| {
ImageError::Decoding(DecodingError::new(
ImageFormatHint::Exact(ImageFormat::OpenExr),
"image does not contain non-deep rgb channels",
))
})?;
let has_alpha = exr_reader.headers()[header_index]
.channels
.find_index_of_channel(&Text::from("A"))
.is_some();
Ok(Self {
alpha_preference,
exr_reader,
header_index,
alpha_present_in_file: has_alpha,
})
}
// does not leak exrs-specific meta data into public api, just does it for this module
fn selected_exr_header(&self) -> &exr::meta::header::Header {
&self.exr_reader.meta_data().headers[self.header_index]
}
}
impl<'a, R: 'a + Read + Seek> ImageDecoder<'a> for OpenExrDecoder<R> {
type Reader = Cursor<Vec<u8>>;
fn dimensions(&self) -> (u32, u32) {
let size = self
.selected_exr_header()
.shared_attributes
.display_window
.size;
(size.width() as u32, size.height() as u32)
}
fn color_type(&self) -> ColorType {
let returns_alpha = self.alpha_preference.unwrap_or(self.alpha_present_in_file);
if returns_alpha {
ColorType::Rgba32F
} else {
ColorType::Rgb32F
}
}
fn original_color_type(&self) -> ExtendedColorType {
if self.alpha_present_in_file {
ExtendedColorType::Rgba32F
} else {
ExtendedColorType::Rgb32F
}
}
/// Use `read_image` instead if possible,
/// as this method creates a whole new buffer just to contain the entire image.
fn into_reader(self) -> ImageResult<Self::Reader> {
Ok(Cursor::new(decoder_to_vec(self)?))
}
fn scanline_bytes(&self) -> u64 {
// we cannot always read individual scan lines for every file,
// as the tiles or lines in the file could be in random or reversed order.
// therefore we currently read all lines at once
// Todo: optimize for specific exr.line_order?
self.total_bytes()
}
// reads with or without alpha, depending on `self.alpha_preference` and `self.alpha_present_in_file`
fn read_image_with_progress<F: Fn(Progress)>(
self,
unaligned_bytes: &mut [u8],
progress_callback: F,
) -> ImageResult<()> {
let blocks_in_header = self.selected_exr_header().chunk_count as u64;
let channel_count = self.color_type().channel_count() as usize;
let display_window = self.selected_exr_header().shared_attributes.display_window;
let data_window_offset =
self.selected_exr_header().own_attributes.layer_position - display_window.position;
{
// check whether the buffer is large enough for the dimensions of the file
let (width, height) = self.dimensions();
let bytes_per_pixel = self.color_type().bytes_per_pixel() as usize;
let expected_byte_count = (width as usize)
.checked_mul(height as usize)
.and_then(|size| size.checked_mul(bytes_per_pixel));
// if the width and height do not match the length of the bytes, the arguments are invalid
let has_invalid_size_or_overflowed = expected_byte_count
.map(|expected_byte_count| unaligned_bytes.len() != expected_byte_count)
// otherwise the size calculation overflowed: the expected size exceeds
// addressable memory, so the provided data must be too small and is invalid.
.unwrap_or(true);
if has_invalid_size_or_overflowed {
panic!("byte buffer not large enough for the specified dimensions and f32 pixels");
}
}
let result = read()
.no_deep_data()
.largest_resolution_level()
.rgba_channels(
move |_size, _channels| vec![0_f32; display_window.size.area() * channel_count],
move |buffer, index_in_data_window, (r, g, b, a_or_1): (f32, f32, f32, f32)| {
let index_in_display_window =
index_in_data_window.to_i32() + data_window_offset;
// only keep pixels inside the data window
// TODO filter chunks based on this
if index_in_display_window.x() >= 0
&& index_in_display_window.y() >= 0
&& index_in_display_window.x() < display_window.size.width() as i32
&& index_in_display_window.y() < display_window.size.height() as i32
{
let index_in_display_window =
index_in_display_window.to_usize("index bug").unwrap();
let first_f32_index =
index_in_display_window.flat_index_for_size(display_window.size);
buffer[first_f32_index * channel_count
..(first_f32_index + 1) * channel_count]
.copy_from_slice(&[r, g, b, a_or_1][0..channel_count]);
// TODO white point chromaticities + srgb/linear conversion?
}
},
)
.first_valid_layer() // TODO select exact layer by self.header_index?
.all_attributes()
.on_progress(|progress| {
progress_callback(
Progress::new(
(progress * blocks_in_header as f64) as u64,
blocks_in_header,
), // TODO precision errors?
);
})
.from_chunks(self.exr_reader)
.map_err(to_image_err)?;
// TODO this copy is strictly not necessary, but the exr api is a little too simple for reading into a borrowed target slice
// this cast is safe and works with any alignment, as bytes are copied, and not f32 values.
// note: buffer slice length is checked in the beginning of this function and will be correct at this point
unaligned_bytes.copy_from_slice(bytemuck::cast_slice(
result.layer_data.channel_data.pixels.as_slice(),
));
Ok(())
}
}
/// Write a raw byte buffer of pixels,
/// returning an Error if it has an invalid length.
///
/// Assumes the writer is buffered. In most cases,
/// you should wrap your writer in a `BufWriter` for best performance.
// private. access via `OpenExrEncoder`
fn write_buffer(
mut buffered_write: impl Write + Seek,
unaligned_bytes: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
let width = width as usize;
let height = height as usize;
{
// check whether the buffer is large enough for the specified dimensions
let expected_byte_count = width
.checked_mul(height)
.and_then(|size| size.checked_mul(color_type.bytes_per_pixel() as usize));
// if the width and height do not match the length of the bytes, the arguments are invalid
let has_invalid_size_or_overflowed = expected_byte_count
.map(|expected_byte_count| unaligned_bytes.len() < expected_byte_count)
// otherwise the size calculation overflowed: the expected size exceeds
// addressable memory, so the provided data must be too small and is invalid.
.unwrap_or(true);
if has_invalid_size_or_overflowed {
return Err(ImageError::Encoding(EncodingError::new(
ImageFormatHint::Exact(ImageFormat::OpenExr),
"byte buffer not large enough for the specified dimensions and f32 pixels",
)));
}
}
// bytes might be unaligned, so we cannot cast the whole slice; instead, look up each f32 individually
let lookup_f32 = move |f32_index: usize| {
let unaligned_f32_bytes_slice = &unaligned_bytes[f32_index * 4..(f32_index + 1) * 4];
let f32_bytes_array = unaligned_f32_bytes_slice
.try_into()
.expect("indexing error");
f32::from_ne_bytes(f32_bytes_array)
};
match color_type {
ColorType::Rgb32F => {
exr::prelude::Image // TODO compression method zip??
::from_channels(
(width, height),
SpecificChannels::rgb(|pixel: Vec2<usize>| {
let pixel_index = 3 * pixel.flat_index_for_size(Vec2(width, height));
(
lookup_f32(pixel_index),
lookup_f32(pixel_index + 1),
lookup_f32(pixel_index + 2),
)
}),
)
.write()
// .on_progress(|progress| todo!())
.to_buffered(&mut buffered_write)
.map_err(to_image_err)?;
}
ColorType::Rgba32F => {
exr::prelude::Image // TODO compression method zip??
::from_channels(
(width, height),
SpecificChannels::rgba(|pixel: Vec2<usize>| {
let pixel_index = 4 * pixel.flat_index_for_size(Vec2(width, height));
(
lookup_f32(pixel_index),
lookup_f32(pixel_index + 1),
lookup_f32(pixel_index + 2),
lookup_f32(pixel_index + 3),
)
}),
)
.write()
// .on_progress(|progress| todo!())
.to_buffered(&mut buffered_write)
.map_err(to_image_err)?;
}
// TODO other color types and channel types
unsupported_color_type => {
return Err(ImageError::Encoding(EncodingError::new(
ImageFormatHint::Exact(ImageFormat::OpenExr),
format!(
"writing color type {:?} not yet supported",
unsupported_color_type
),
)))
}
}
Ok(())
}
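The unaligned-read pattern used by `write_buffer` can be exercised standalone. This is a minimal sketch with made-up example data; the point is that copying four bytes into an array means `f32::from_ne_bytes` never requires the source slice to be f32-aligned:

```rust
fn main() {
    // Serialize a few f32 values to raw native-endian bytes (example data).
    let bytes: Vec<u8> = [1.5f32, -2.0, 0.25]
        .iter()
        .flat_map(|f| f.to_ne_bytes())
        .collect();
    // Same shape as the `lookup_f32` closure above: copy four bytes into
    // an array, so no alignment is required on `bytes`.
    let lookup_f32 = |i: usize| {
        let b = &bytes[i * 4..(i + 1) * 4];
        f32::from_ne_bytes([b[0], b[1], b[2], b[3]])
    };
    assert_eq!(lookup_f32(0), 1.5);
    assert_eq!(lookup_f32(1), -2.0);
    assert_eq!(lookup_f32(2), 0.25);
}
```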
// TODO is this struct and trait actually used anywhere?
/// A thin wrapper that implements `ImageEncoder` for OpenEXR images. Will behave like `image::codecs::openexr::write_buffer`.
#[derive(Debug)]
pub struct OpenExrEncoder<W>(W);
impl<W> OpenExrEncoder<W> {
/// Create an `ImageEncoder`. Does not write anything yet. Writing later will behave like `image::codecs::openexr::write_buffer`.
// use constructor, not public field, for future backwards-compatibility
pub fn new(write: W) -> Self {
Self(write)
}
}
impl<W> ImageEncoder for OpenExrEncoder<W>
where
W: Write + Seek,
{
/// Writes the complete image.
///
/// Returns an Error if it has an invalid length.
/// Assumes the writer is buffered. In most cases,
/// you should wrap your writer in a `BufWriter` for best performance.
fn write_image(
self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
write_buffer(self.0, buf, width, height, color_type)
}
}
fn to_image_err(exr_error: Error) -> ImageError {
ImageError::Decoding(DecodingError::new(
ImageFormatHint::Exact(ImageFormat::OpenExr),
exr_error.to_string(),
))
}
#[cfg(test)]
mod test {
use super::*;
use std::io::BufReader;
use std::path::{Path, PathBuf};
use crate::buffer_::{Rgb32FImage, Rgba32FImage};
use crate::error::{LimitError, LimitErrorKind};
use crate::{ImageBuffer, Rgb, Rgba};
const BASE_PATH: &[&str] = &[".", "tests", "images", "exr"];
/// Write an `Rgb32FImage`.
/// Assumes the writer is buffered. In most cases,
/// you should wrap your writer in a `BufWriter` for best performance.
fn write_rgb_image(write: impl Write + Seek, image: &Rgb32FImage) -> ImageResult<()> {
write_buffer(
write,
bytemuck::cast_slice(image.as_raw().as_slice()),
image.width(),
image.height(),
ColorType::Rgb32F,
)
}
/// Write an `Rgba32FImage`.
/// Assumes the writer is buffered. In most cases,
/// you should wrap your writer in a `BufWriter` for best performance.
fn write_rgba_image(write: impl Write + Seek, image: &Rgba32FImage) -> ImageResult<()> {
write_buffer(
write,
bytemuck::cast_slice(image.as_raw().as_slice()),
image.width(),
image.height(),
ColorType::Rgba32F,
)
}
/// Read the file from the specified path into an `Rgba32FImage`.
fn read_as_rgba_image_from_file(path: impl AsRef<Path>) -> ImageResult<Rgba32FImage> {
read_as_rgba_image(BufReader::new(std::fs::File::open(path)?))
}
/// Read the file from the specified path into an `Rgb32FImage`.
fn read_as_rgb_image_from_file(path: impl AsRef<Path>) -> ImageResult<Rgb32FImage> {
read_as_rgb_image(BufReader::new(std::fs::File::open(path)?))
}
/// Read the file from the specified path into an `Rgb32FImage`.
fn read_as_rgb_image(read: impl Read + Seek) -> ImageResult<Rgb32FImage> {
let decoder = OpenExrDecoder::with_alpha_preference(read, Some(false))?;
let (width, height) = decoder.dimensions();
let buffer: Vec<f32> = decoder_to_vec(decoder)?;
ImageBuffer::from_raw(width, height, buffer)
// this should be the only reason for the "from raw" call to fail,
// even though such a large allocation would probably cause an error much earlier
.ok_or_else(|| {
ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory))
})
}
/// Read the file from the specified path into an `Rgba32FImage`.
fn read_as_rgba_image(read: impl Read + Seek) -> ImageResult<Rgba32FImage> {
let decoder = OpenExrDecoder::with_alpha_preference(read, Some(true))?;
let (width, height) = decoder.dimensions();
let buffer: Vec<f32> = decoder_to_vec(decoder)?;
ImageBuffer::from_raw(width, height, buffer)
// this should be the only reason for the "from raw" call to fail,
// even though such a large allocation would probably cause an error much earlier
.ok_or_else(|| {
ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory))
})
}
#[test]
fn compare_exr_hdr() {
if cfg!(not(feature = "hdr")) {
eprintln!("warning: to run all the openexr tests, activate the hdr feature flag");
}
#[cfg(feature = "hdr")]
{
let folder = BASE_PATH.iter().collect::<PathBuf>();
let reference_path = folder.clone().join("overexposed gradient.hdr");
let exr_path = folder
.clone()
.join("overexposed gradient - data window equals display window.exr");
let hdr: Vec<Rgb<f32>> = crate::codecs::hdr::HdrDecoder::new(std::io::BufReader::new(
std::fs::File::open(&reference_path).unwrap(),
))
.unwrap()
.read_image_hdr()
.unwrap();
let exr_pixels: Rgb32FImage = read_as_rgb_image_from_file(exr_path).unwrap();
assert_eq!(
exr_pixels.dimensions().0 * exr_pixels.dimensions().1,
hdr.len() as u32
);
for (expected, found) in hdr.iter().zip(exr_pixels.pixels()) {
for (expected, found) in expected.0.iter().zip(found.0.iter()) {
// the large tolerance seems to be caused by
// the RGBE u8x4 pixel quantization of the hdr image format
assert!(
(expected - found).abs() < 0.1,
"expected {}, found {}",
expected,
found
);
}
}
}
}
#[test]
fn roundtrip_rgba() {
let mut next_random = vec![1.0, 0.0, -1.0, -3.14, 27.0, 11.0, 31.0]
.into_iter()
.cycle();
let mut next_random = move || next_random.next().unwrap();
let generated_image: Rgba32FImage = ImageBuffer::from_fn(9, 31, |_x, _y| {
Rgba([next_random(), next_random(), next_random(), next_random()])
});
let mut bytes = vec![];
write_rgba_image(Cursor::new(&mut bytes), &generated_image).unwrap();
let decoded_image = read_as_rgba_image(Cursor::new(bytes)).unwrap();
debug_assert_eq!(generated_image, decoded_image);
}
#[test]
fn roundtrip_rgb() {
let mut next_random = vec![1.0, 0.0, -1.0, -3.14, 27.0, 11.0, 31.0]
.into_iter()
.cycle();
let mut next_random = move || next_random.next().unwrap();
let generated_image: Rgb32FImage = ImageBuffer::from_fn(9, 31, |_x, _y| {
Rgb([next_random(), next_random(), next_random()])
});
let mut bytes = vec![];
write_rgb_image(Cursor::new(&mut bytes), &generated_image).unwrap();
let decoded_image = read_as_rgb_image(Cursor::new(bytes)).unwrap();
debug_assert_eq!(generated_image, decoded_image);
}
#[test]
fn compare_rgba_rgb() {
let exr_path = BASE_PATH
.iter()
.collect::<PathBuf>()
.join("overexposed gradient - data window equals display window.exr");
let rgb: Rgb32FImage = read_as_rgb_image_from_file(&exr_path).unwrap();
let rgba: Rgba32FImage = read_as_rgba_image_from_file(&exr_path).unwrap();
assert_eq!(rgba.dimensions(), rgb.dimensions());
for (Rgb(rgb), Rgba(rgba)) in rgb.pixels().zip(rgba.pixels()) {
assert_eq!(rgb, &rgba[..3]);
}
}
#[test]
fn compare_cropped() {
// like in photoshop, exr images may have layers placed anywhere in a canvas.
// we don't want to load the pixels from the layer, but we want to load the pixels from the canvas.
// a layer might be smaller than the canvas, in that case the canvas should be transparent black
// where no layer was covering it. a layer might also be larger than the canvas,
// these pixels should be discarded.
//
// in this test we want to make sure that an
// auto-cropped image will be reproduced to the original.
let exr_path = BASE_PATH.iter().collect::<PathBuf>();
let original = exr_path.clone().join("cropping - uncropped original.exr");
let cropped = exr_path
.clone()
.join("cropping - data window differs display window.exr");
// smoke-check that the exr files are actually not the same
{
let original_exr = read_first_flat_layer_from_file(&original).unwrap();
let cropped_exr = read_first_flat_layer_from_file(&cropped).unwrap();
assert_eq!(
original_exr.attributes.display_window,
cropped_exr.attributes.display_window
);
assert_ne!(
original_exr.layer_data.attributes.layer_position,
cropped_exr.layer_data.attributes.layer_position
);
assert_ne!(original_exr.layer_data.size, cropped_exr.layer_data.size);
}
// check that they result in the same image
let original: Rgba32FImage = read_as_rgba_image_from_file(&original).unwrap();
let cropped: Rgba32FImage = read_as_rgba_image_from_file(&cropped).unwrap();
assert_eq!(original.dimensions(), cropped.dimensions());
// the following is not a simple assert_eq, as in case of an error,
// the whole image would be printed to the console, which takes forever
assert!(original.pixels().zip(cropped.pixels()).all(|(a, b)| a == b));
}
}

778
vendor/image/src/codecs/png.rs vendored Normal file

@@ -0,0 +1,778 @@
//! Decoding and Encoding of PNG Images
//!
//! PNG (Portable Network Graphics) is an image format that supports lossless compression.
//!
//! # Related Links
//! * <http://www.w3.org/TR/PNG/> - The PNG Specification
//!
use std::convert::TryFrom;
use std::fmt;
use std::io::{self, Read, Write};
use num_rational::Ratio;
use png::{BlendOp, DisposeOp};
use crate::animation::{Delay, Frame, Frames};
use crate::color::{Blend, ColorType, ExtendedColorType};
use crate::error::{
DecodingError, EncodingError, ImageError, ImageResult, LimitError, LimitErrorKind,
ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind,
};
use crate::image::{AnimationDecoder, ImageDecoder, ImageEncoder, ImageFormat};
use crate::io::Limits;
use crate::{DynamicImage, GenericImage, ImageBuffer, Luma, LumaA, Rgb, Rgba, RgbaImage};
// http://www.w3.org/TR/PNG-Structure.html
// The first eight bytes of a PNG file always contain the following (decimal) values:
pub(crate) const PNG_SIGNATURE: [u8; 8] = [137, 80, 78, 71, 13, 10, 26, 10];
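As a quick illustration of the comment above, the decimal values are exactly the ASCII-style signature bytes `\x89PNG\r\n\x1a\n` (the constant is copied here so the sketch is self-contained):

```rust
fn main() {
    // Same bytes as the module's PNG_SIGNATURE constant, copied for this sketch.
    const PNG_SIGNATURE: [u8; 8] = [137, 80, 78, 71, 13, 10, 26, 10];
    // 137 = 0x89, then "PNG", CR, LF, SUB (0x1a), LF.
    assert_eq!(&PNG_SIGNATURE, b"\x89PNG\r\n\x1a\n");
}
```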
/// Png Reader
///
/// This reader will try to read the png one row at a time,
/// however for interlaced png files this is not possible and
/// these are therefore read at once.
pub struct PngReader<R: Read> {
reader: png::Reader<R>,
buffer: Vec<u8>,
index: usize,
}
impl<R: Read> PngReader<R> {
fn new(mut reader: png::Reader<R>) -> ImageResult<PngReader<R>> {
let len = reader.output_buffer_size();
// Since interlaced images do not come in
// scanline order it is almost impossible to
// read them in a streaming fashion, however
// this shouldn't be too big of a problem
// as most interlaced images should fit in memory.
let buffer = if reader.info().interlaced {
let mut buffer = vec![0; len];
reader
.next_frame(&mut buffer)
.map_err(ImageError::from_png)?;
buffer
} else {
Vec::new()
};
Ok(PngReader {
reader,
buffer,
index: 0,
})
}
}
impl<R: Read> Read for PngReader<R> {
fn read(&mut self, mut buf: &mut [u8]) -> io::Result<usize> {
// io::Write::write for a slice cannot fail
let bytes_read = buf.write(&self.buffer[self.index..]).unwrap();
let mut bytes = bytes_read;
self.index += bytes_read;
while self.index >= self.buffer.len() {
match self.reader.next_row()? {
Some(row) => {
// Faster to copy directly into the external buffer
let bytes_read = buf.write(row.data()).unwrap();
bytes += bytes_read;
self.buffer = row.data()[bytes_read..].to_owned();
self.index = 0;
}
None => return Ok(bytes),
}
}
Ok(bytes)
}
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> {
let mut bytes = self.buffer.len();
if buf.is_empty() {
std::mem::swap(&mut self.buffer, buf);
} else {
buf.extend_from_slice(&self.buffer);
self.buffer.clear();
}
self.index = 0;
while let Some(row) = self.reader.next_row()? {
buf.extend_from_slice(row.data());
bytes += row.data().len();
}
Ok(bytes)
}
}
/// PNG decoder
pub struct PngDecoder<R: Read> {
color_type: ColorType,
reader: png::Reader<R>,
}
impl<R: Read> PngDecoder<R> {
/// Creates a new decoder that decodes from the stream ```r```
pub fn new(r: R) -> ImageResult<PngDecoder<R>> {
Self::with_limits(r, Limits::default())
}
/// Creates a new decoder that decodes from the stream ```r``` with the given limits.
pub fn with_limits(r: R, limits: Limits) -> ImageResult<PngDecoder<R>> {
limits.check_support(&crate::io::LimitSupport::default())?;
let max_bytes = usize::try_from(limits.max_alloc.unwrap_or(u64::MAX)).unwrap_or(usize::MAX);
let mut decoder = png::Decoder::new_with_limits(r, png::Limits { bytes: max_bytes });
let info = decoder.read_header_info().map_err(ImageError::from_png)?;
limits.check_dimensions(info.width, info.height)?;
// By default the PNG decoder will scale 16 bpc to 8 bpc, so custom
// transformations must be set. EXPAND preserves the default behavior
// expanding bpc < 8 to 8 bpc.
decoder.set_transformations(png::Transformations::EXPAND);
let reader = decoder.read_info().map_err(ImageError::from_png)?;
let (color_type, bits) = reader.output_color_type();
let color_type = match (color_type, bits) {
(png::ColorType::Grayscale, png::BitDepth::Eight) => ColorType::L8,
(png::ColorType::Grayscale, png::BitDepth::Sixteen) => ColorType::L16,
(png::ColorType::GrayscaleAlpha, png::BitDepth::Eight) => ColorType::La8,
(png::ColorType::GrayscaleAlpha, png::BitDepth::Sixteen) => ColorType::La16,
(png::ColorType::Rgb, png::BitDepth::Eight) => ColorType::Rgb8,
(png::ColorType::Rgb, png::BitDepth::Sixteen) => ColorType::Rgb16,
(png::ColorType::Rgba, png::BitDepth::Eight) => ColorType::Rgba8,
(png::ColorType::Rgba, png::BitDepth::Sixteen) => ColorType::Rgba16,
(png::ColorType::Grayscale, png::BitDepth::One) => {
return Err(unsupported_color(ExtendedColorType::L1))
}
(png::ColorType::GrayscaleAlpha, png::BitDepth::One) => {
return Err(unsupported_color(ExtendedColorType::La1))
}
(png::ColorType::Rgb, png::BitDepth::One) => {
return Err(unsupported_color(ExtendedColorType::Rgb1))
}
(png::ColorType::Rgba, png::BitDepth::One) => {
return Err(unsupported_color(ExtendedColorType::Rgba1))
}
(png::ColorType::Grayscale, png::BitDepth::Two) => {
return Err(unsupported_color(ExtendedColorType::L2))
}
(png::ColorType::GrayscaleAlpha, png::BitDepth::Two) => {
return Err(unsupported_color(ExtendedColorType::La2))
}
(png::ColorType::Rgb, png::BitDepth::Two) => {
return Err(unsupported_color(ExtendedColorType::Rgb2))
}
(png::ColorType::Rgba, png::BitDepth::Two) => {
return Err(unsupported_color(ExtendedColorType::Rgba2))
}
(png::ColorType::Grayscale, png::BitDepth::Four) => {
return Err(unsupported_color(ExtendedColorType::L4))
}
(png::ColorType::GrayscaleAlpha, png::BitDepth::Four) => {
return Err(unsupported_color(ExtendedColorType::La4))
}
(png::ColorType::Rgb, png::BitDepth::Four) => {
return Err(unsupported_color(ExtendedColorType::Rgb4))
}
(png::ColorType::Rgba, png::BitDepth::Four) => {
return Err(unsupported_color(ExtendedColorType::Rgba4))
}
(png::ColorType::Indexed, bits) => {
return Err(unsupported_color(ExtendedColorType::Unknown(bits as u8)))
}
};
Ok(PngDecoder { color_type, reader })
}
/// Turn this into an iterator over the animation frames.
///
/// Reading the complete animation requires more memory than reading the data from the IDAT
/// frame: multiple frame buffers need to be reserved at the same time. We further do not
/// support compositing 16-bit colors. In any case this would be lossy as the interface of
/// animation decoders does not support 16-bit colors.
///
/// If something is not supported or a limit is violated then the decoding step that requires
/// them will fail and an error will be returned instead of the frame. No further frames will
/// be returned.
pub fn apng(self) -> ApngDecoder<R> {
ApngDecoder::new(self)
}
/// Returns if the image contains an animation.
///
/// Note that the file itself decides if the default image is considered to be part of the
/// animation. When it is not, the common interpretation is to use it as a thumbnail.
///
/// If a non-animated image is converted into an `ApngDecoder` then its iterator is empty.
pub fn is_apng(&self) -> bool {
self.reader.info().animation_control.is_some()
}
}
fn unsupported_color(ect: ExtendedColorType) -> ImageError {
ImageError::Unsupported(UnsupportedError::from_format_and_kind(
ImageFormat::Png.into(),
UnsupportedErrorKind::Color(ect),
))
}
impl<'a, R: 'a + Read> ImageDecoder<'a> for PngDecoder<R> {
type Reader = PngReader<R>;
fn dimensions(&self) -> (u32, u32) {
self.reader.info().size()
}
fn color_type(&self) -> ColorType {
self.color_type
}
fn icc_profile(&mut self) -> Option<Vec<u8>> {
self.reader.info().icc_profile.as_ref().map(|x| x.to_vec())
}
fn into_reader(self) -> ImageResult<Self::Reader> {
PngReader::new(self.reader)
}
fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> {
use byteorder::{BigEndian, ByteOrder, NativeEndian};
assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes()));
self.reader.next_frame(buf).map_err(ImageError::from_png)?;
// PNG images are big endian. For 16 bit per channel and larger types,
// the buffer may need to be reordered to native endianness per the
// contract of `read_image`.
// TODO: assumes equal channel bit depth.
let bpc = self.color_type().bytes_per_pixel() / self.color_type().channel_count();
match bpc {
1 => (), // No reordering necessary for u8
2 => buf.chunks_mut(2).for_each(|c| {
let v = BigEndian::read_u16(c);
NativeEndian::write_u16(c, v)
}),
_ => unreachable!(),
}
Ok(())
}
fn scanline_bytes(&self) -> u64 {
let width = self.reader.info().width;
self.reader.output_line_size(width) as u64
}
}
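The 16-bit reorder performed in `read_image` can be sketched without the `byteorder` crate, using only std; this is an equivalent standalone illustration, not the crate's actual code path:

```rust
fn main() {
    // PNG stores samples big-endian; rewrite each 2-byte chunk so the
    // buffer holds native-endian u16 values on any host. Example data.
    let mut buf = [0x12u8, 0x34, 0xAB, 0xCD];
    for c in buf.chunks_mut(2) {
        let v = u16::from_be_bytes([c[0], c[1]]);
        c.copy_from_slice(&v.to_ne_bytes());
    }
    // Reading back as native-endian now yields the intended sample values.
    assert_eq!(u16::from_ne_bytes([buf[0], buf[1]]), 0x1234);
    assert_eq!(u16::from_ne_bytes([buf[2], buf[3]]), 0xABCD);
}
```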
/// An [`AnimationDecoder`] adapter of [`PngDecoder`].
///
/// See [`PngDecoder::apng`] for more information.
///
/// [`AnimationDecoder`]: ../trait.AnimationDecoder.html
/// [`PngDecoder`]: struct.PngDecoder.html
/// [`PngDecoder::apng`]: struct.PngDecoder.html#method.apng
pub struct ApngDecoder<R: Read> {
inner: PngDecoder<R>,
/// The current output buffer.
current: RgbaImage,
/// The previous output buffer, used for dispose op previous.
previous: RgbaImage,
/// The dispose op of the current frame.
dispose: DisposeOp,
/// The number of images we still expect to be able to load.
remaining: u32,
/// The next (first) image is the thumbnail.
has_thumbnail: bool,
}
impl<R: Read> ApngDecoder<R> {
fn new(inner: PngDecoder<R>) -> Self {
let (width, height) = inner.dimensions();
let info = inner.reader.info();
let remaining = match info.animation_control() {
// The expected number of fcTL in the remaining image.
Some(actl) => actl.num_frames,
None => 0,
};
// If the IDAT has no fcTL then it is not part of the animation counted by
// num_frames. All following fdAT chunks must be preceded by an fcTL.
let has_thumbnail = info.frame_control.is_none();
ApngDecoder {
inner,
// TODO: should we delay this allocation? At least if we support limits we should.
current: RgbaImage::new(width, height),
previous: RgbaImage::new(width, height),
dispose: DisposeOp::Background,
remaining,
has_thumbnail,
}
}
// TODO: thumbnail(&mut self) -> Option<impl ImageDecoder<'_>>
/// Decode one subframe and overlay it on the canvas.
fn mix_next_frame(&mut self) -> Result<Option<&RgbaImage>, ImageError> {
// Remove this image from remaining.
self.remaining = match self.remaining.checked_sub(1) {
None => return Ok(None),
Some(next) => next,
};
// Shorten ourselves to 0 in case of error.
let remaining = self.remaining;
self.remaining = 0;
// Skip the thumbnail that is not part of the animation.
if self.has_thumbnail {
self.has_thumbnail = false;
let mut buffer = vec![0; self.inner.reader.output_buffer_size()];
self.inner
.reader
.next_frame(&mut buffer)
.map_err(ImageError::from_png)?;
}
self.animatable_color_type()?;
// Dispose of the previous frame.
match self.dispose {
DisposeOp::None => {
self.previous.clone_from(&self.current);
}
DisposeOp::Background => {
self.previous.clone_from(&self.current);
self.current
.pixels_mut()
.for_each(|pixel| *pixel = Rgba([0, 0, 0, 0]));
}
DisposeOp::Previous => {
self.current.clone_from(&self.previous);
}
}
// Read next frame data.
let mut buffer = vec![0; self.inner.reader.output_buffer_size()];
self.inner
.reader
.next_frame(&mut buffer)
.map_err(ImageError::from_png)?;
let info = self.inner.reader.info();
// Find out how to interpret the decoded frame.
let (width, height, px, py, blend);
match info.frame_control() {
None => {
width = info.width;
height = info.height;
px = 0;
py = 0;
blend = BlendOp::Source;
}
Some(fc) => {
width = fc.width;
height = fc.height;
px = fc.x_offset;
py = fc.y_offset;
blend = fc.blend_op;
self.dispose = fc.dispose_op;
}
};
// Turn the data into an rgba image proper.
let source = match self.inner.color_type {
ColorType::L8 => {
let image = ImageBuffer::<Luma<_>, _>::from_raw(width, height, buffer).unwrap();
DynamicImage::ImageLuma8(image).into_rgba8()
}
ColorType::La8 => {
let image = ImageBuffer::<LumaA<_>, _>::from_raw(width, height, buffer).unwrap();
DynamicImage::ImageLumaA8(image).into_rgba8()
}
ColorType::Rgb8 => {
let image = ImageBuffer::<Rgb<_>, _>::from_raw(width, height, buffer).unwrap();
DynamicImage::ImageRgb8(image).into_rgba8()
}
ColorType::Rgba8 => ImageBuffer::<Rgba<_>, _>::from_raw(width, height, buffer).unwrap(),
ColorType::L16 | ColorType::Rgb16 | ColorType::La16 | ColorType::Rgba16 => {
                // TODO: to enable, remove the restriction in the `animatable_color_type` method.
                unreachable!("16-bit APNG is not yet supported")
}
_ => unreachable!("Invalid png color"),
};
match blend {
BlendOp::Source => {
self.current
.copy_from(&source, px, py)
.expect("Invalid png image not detected in png");
}
BlendOp::Over => {
// TODO: investigate speed, speed-ups, and bounds-checks.
for (x, y, p) in source.enumerate_pixels() {
self.current.get_pixel_mut(x + px, y + py).blend(p);
}
}
}
        // Decoding succeeded; restore the remaining frame count.
self.remaining = remaining;
// Return composited output buffer.
Ok(Some(&self.current))
}
fn animatable_color_type(&self) -> Result<(), ImageError> {
match self.inner.color_type {
ColorType::L8 | ColorType::Rgb8 | ColorType::La8 | ColorType::Rgba8 => Ok(()),
// TODO: do not handle multi-byte colors. Remember to implement it in `mix_next_frame`.
ColorType::L16 | ColorType::Rgb16 | ColorType::La16 | ColorType::Rgba16 => {
Err(unsupported_color(self.inner.color_type.into()))
}
_ => unreachable!("{:?} not a valid png color", self.inner.color_type),
}
}
}
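The dispose handling in `mix_next_frame` above can be isolated as a standalone sketch. This is a simplified, hypothetical single-pixel model of the three APNG dispose ops, not the crate's API; `Dispose` and `apply_dispose` are made-up names for illustration:

```rust
/// APNG dispose ops, mirroring `png::DisposeOp` (simplified sketch).
#[derive(Clone, Copy)]
enum Dispose {
    None,
    Background,
    Previous,
}

/// Apply a dispose op to a one-pixel RGBA "canvas", as the decoder does
/// per frame: `None` keeps the canvas, `Background` clears it to
/// transparent black, `Previous` restores the saved canvas.
fn apply_dispose(op: Dispose, current: &mut [u8; 4], previous: &mut [u8; 4]) {
    match op {
        Dispose::None => *previous = *current,
        Dispose::Background => {
            *previous = *current;
            *current = [0, 0, 0, 0];
        }
        Dispose::Previous => *current = *previous,
    }
}

fn main() {
    let mut current = [10, 20, 30, 255];
    let mut previous = [0, 0, 0, 0];
    apply_dispose(Dispose::Background, &mut current, &mut previous);
    assert_eq!(current, [0, 0, 0, 0]);
    assert_eq!(previous, [10, 20, 30, 255]);
    println!("ok");
}
```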
impl<'a, R: Read + 'a> AnimationDecoder<'a> for ApngDecoder<R> {
fn into_frames(self) -> Frames<'a> {
struct FrameIterator<R: Read>(ApngDecoder<R>);
impl<R: Read> Iterator for FrameIterator<R> {
type Item = ImageResult<Frame>;
fn next(&mut self) -> Option<Self::Item> {
let image = match self.0.mix_next_frame() {
Ok(Some(image)) => image.clone(),
Ok(None) => return None,
Err(err) => return Some(Err(err)),
};
let info = self.0.inner.reader.info();
let fc = info.frame_control().unwrap();
                // PNG frame delays are rational numbers of seconds.
let num = u32::from(fc.delay_num) * 1_000u32;
let denom = match fc.delay_den {
                    // The standard dictates replacing the denominator with 100 when it is 0.
0 => 100,
d => u32::from(d),
};
let delay = Delay::from_ratio(Ratio::new(num, denom));
Some(Ok(Frame::from_parts(image, 0, 0, delay)))
}
}
Frames::new(Box::new(FrameIterator(self)))
}
}
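The delay computation in `next` above can be written as a standalone sketch; `apng_delay_ms` is a hypothetical helper, not part of the crate:

```rust
/// Convert an APNG (delay_num, delay_den) pair into milliseconds.
/// Per the APNG spec, a zero denominator is treated as 100.
fn apng_delay_ms(delay_num: u16, delay_den: u16) -> u32 {
    let num = u32::from(delay_num) * 1_000;
    let den = match delay_den {
        0 => 100,
        d => u32::from(d),
    };
    num / den
}

fn main() {
    assert_eq!(apng_delay_ms(1, 10), 100); // 1/10 s
    assert_eq!(apng_delay_ms(1, 0), 10); // 1/100 s via the zero rule
    println!("ok");
}
```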
/// PNG encoder
pub struct PngEncoder<W: Write> {
w: W,
compression: CompressionType,
filter: FilterType,
}
/// Compression level of a PNG encoder. The default setting is `Fast`.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
#[non_exhaustive]
pub enum CompressionType {
/// Default compression level
Default,
/// Fast, minimal compression
Fast,
/// High compression level
Best,
/// Huffman coding compression
#[deprecated(note = "use one of the other compression levels instead, such as 'Fast'")]
Huffman,
/// Run-length encoding compression
#[deprecated(note = "use one of the other compression levels instead, such as 'Fast'")]
Rle,
}
/// Filter algorithms used to process image data to improve compression.
///
/// The default filter is `Adaptive`.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
#[non_exhaustive]
pub enum FilterType {
/// No processing done, best used for low bit depth grayscale or data with a
/// low color count
NoFilter,
/// Filters based on previous pixel in the same scanline
Sub,
/// Filters based on the scanline above
Up,
    /// Filters based on the average of the left and above neighbor pixels
Avg,
/// Algorithm that takes into account the left, upper left, and above pixels
Paeth,
/// Uses a heuristic to select one of the preceding filters for each
/// scanline rather than one filter for the entire image
Adaptive,
}
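The `Paeth` variant above refers to the predictor from the PNG specification; the crate delegates the actual filtering to the `png` crate, but as a standalone sketch:

```rust
/// The Paeth predictor from the PNG spec: given the left (a), above (b),
/// and upper-left (c) neighbors, pick the one closest to a + b - c,
/// breaking ties in the order a, b, c.
fn paeth_predictor(a: u8, b: u8, c: u8) -> u8 {
    let p = i16::from(a) + i16::from(b) - i16::from(c);
    let pa = (p - i16::from(a)).abs();
    let pb = (p - i16::from(b)).abs();
    let pc = (p - i16::from(c)).abs();
    if pa <= pb && pa <= pc {
        a
    } else if pb <= pc {
        b
    } else {
        c
    }
}

fn main() {
    assert_eq!(paeth_predictor(0, 0, 0), 0);
    assert_eq!(paeth_predictor(10, 20, 10), 20); // p = 20, closest to b
    println!("ok");
}
```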
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
#[non_exhaustive]
enum BadPngRepresentation {
ColorType(ColorType),
}
impl<W: Write> PngEncoder<W> {
    /// Create a new encoder that writes its output to `w`
pub fn new(w: W) -> PngEncoder<W> {
PngEncoder {
w,
compression: CompressionType::default(),
filter: FilterType::default(),
}
}
/// Create a new encoder that writes its output to `w` with `CompressionType` `compression` and
/// `FilterType` `filter`.
///
/// It is best to view the options as a _hint_ to the implementation on the smallest or fastest
/// option for encoding a particular image. That is, using options that map directly to a PNG
/// image parameter will use this parameter where possible. But variants that have no direct
/// mapping may be interpreted differently in minor versions. The exact output is expressly
    /// __not__ part of the SemVer stability guarantee.
///
/// Note that it is not optimal to use a single filter type, so an adaptive
/// filter type is selected as the default. The filter which best minimizes
/// file size may change with the type of compression used.
pub fn new_with_quality(
w: W,
compression: CompressionType,
filter: FilterType,
) -> PngEncoder<W> {
PngEncoder {
w,
compression,
filter,
}
}
/// Encodes the image `data` that has dimensions `width` and `height` and `ColorType` `c`.
///
/// Expects data in big endian.
#[deprecated = "Use `PngEncoder::write_image` instead. Beware that `write_image` has a different endianness convention"]
pub fn encode(self, data: &[u8], width: u32, height: u32, color: ColorType) -> ImageResult<()> {
self.encode_inner(data, width, height, color)
}
fn encode_inner(
self,
data: &[u8],
width: u32,
height: u32,
color: ColorType,
) -> ImageResult<()> {
let (ct, bits) = match color {
ColorType::L8 => (png::ColorType::Grayscale, png::BitDepth::Eight),
ColorType::L16 => (png::ColorType::Grayscale, png::BitDepth::Sixteen),
ColorType::La8 => (png::ColorType::GrayscaleAlpha, png::BitDepth::Eight),
ColorType::La16 => (png::ColorType::GrayscaleAlpha, png::BitDepth::Sixteen),
ColorType::Rgb8 => (png::ColorType::Rgb, png::BitDepth::Eight),
ColorType::Rgb16 => (png::ColorType::Rgb, png::BitDepth::Sixteen),
ColorType::Rgba8 => (png::ColorType::Rgba, png::BitDepth::Eight),
ColorType::Rgba16 => (png::ColorType::Rgba, png::BitDepth::Sixteen),
_ => {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Png.into(),
UnsupportedErrorKind::Color(color.into()),
),
))
}
};
let comp = match self.compression {
CompressionType::Default => png::Compression::Default,
CompressionType::Best => png::Compression::Best,
_ => png::Compression::Fast,
};
let (filter, adaptive_filter) = match self.filter {
FilterType::NoFilter => (
png::FilterType::NoFilter,
png::AdaptiveFilterType::NonAdaptive,
),
FilterType::Sub => (png::FilterType::Sub, png::AdaptiveFilterType::NonAdaptive),
FilterType::Up => (png::FilterType::Up, png::AdaptiveFilterType::NonAdaptive),
FilterType::Avg => (png::FilterType::Avg, png::AdaptiveFilterType::NonAdaptive),
FilterType::Paeth => (png::FilterType::Paeth, png::AdaptiveFilterType::NonAdaptive),
FilterType::Adaptive => (png::FilterType::Sub, png::AdaptiveFilterType::Adaptive),
};
let mut encoder = png::Encoder::new(self.w, width, height);
encoder.set_color(ct);
encoder.set_depth(bits);
encoder.set_compression(comp);
encoder.set_filter(filter);
encoder.set_adaptive_filter(adaptive_filter);
let mut writer = encoder
.write_header()
.map_err(|e| ImageError::IoError(e.into()))?;
writer
.write_image_data(data)
.map_err(|e| ImageError::IoError(e.into()))
}
}
impl<W: Write> ImageEncoder for PngEncoder<W> {
/// Write a PNG image with the specified width, height, and color type.
///
/// For color types with 16-bit per channel or larger, the contents of `buf` should be in
/// native endian. PngEncoder will automatically convert to big endian as required by the
/// underlying PNG format.
fn write_image(
self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
use byteorder::{BigEndian, ByteOrder, NativeEndian};
use ColorType::*;
// PNG images are big endian. For 16 bit per channel and larger types,
// the buffer may need to be reordered to big endian per the
// contract of `write_image`.
// TODO: assumes equal channel bit depth.
match color_type {
L8 | La8 | Rgb8 | Rgba8 => {
                // No reordering necessary for u8
self.encode_inner(buf, width, height, color_type)
}
L16 | La16 | Rgb16 | Rgba16 => {
// Because the buffer is immutable and the PNG encoder does not
// yet take Write/Read traits, create a temporary buffer for
// big endian reordering.
let mut reordered = vec![0; buf.len()];
buf.chunks(2)
.zip(reordered.chunks_mut(2))
.for_each(|(b, r)| BigEndian::write_u16(r, NativeEndian::read_u16(b)));
self.encode_inner(&reordered, width, height, color_type)
}
_ => Err(ImageError::Encoding(EncodingError::new(
ImageFormat::Png.into(),
BadPngRepresentation::ColorType(color_type),
))),
}
}
}
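The native-to-big-endian reordering above can be sketched without `byteorder`, using only the standard library's `to_be_bytes`; the helper name is made up for illustration:

```rust
/// Serialize native-endian u16 samples as big-endian bytes, the byte
/// order PNG requires for 16-bit channels.
fn samples_to_big_endian(samples: &[u16]) -> Vec<u8> {
    samples.iter().flat_map(|v| v.to_be_bytes()).collect()
}

fn main() {
    assert_eq!(
        samples_to_big_endian(&[0x0102, 0xA0B0]),
        vec![0x01, 0x02, 0xA0, 0xB0]
    );
    println!("ok");
}
```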
impl ImageError {
fn from_png(err: png::DecodingError) -> ImageError {
use png::DecodingError::*;
match err {
IoError(err) => ImageError::IoError(err),
// The input image was not a valid PNG.
err @ Format(_) => {
ImageError::Decoding(DecodingError::new(ImageFormat::Png.into(), err))
}
// Other is used when:
// - The decoder is polled for more animation frames despite being done (or not being animated
// in the first place).
// - The output buffer does not have the required size.
err @ Parameter(_) => ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(err.to_string()),
)),
LimitsExceeded => {
ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory))
}
}
}
}
impl Default for CompressionType {
fn default() -> Self {
CompressionType::Fast
}
}
impl Default for FilterType {
fn default() -> Self {
FilterType::Adaptive
}
}
impl fmt::Display for BadPngRepresentation {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
Self::ColorType(color_type) => write!(
f,
                "The color {:?} cannot be represented in PNG.",
color_type
),
}
}
}
impl std::error::Error for BadPngRepresentation {}
#[cfg(test)]
mod tests {
use super::*;
use crate::image::ImageDecoder;
use crate::ImageOutputFormat;
use std::io::{Cursor, Read};
#[test]
fn ensure_no_decoder_off_by_one() {
let dec = PngDecoder::new(
std::fs::File::open("tests/images/png/bugfixes/debug_triangle_corners_widescreen.png")
.unwrap(),
)
.expect("Unable to read PNG file (does it exist?)");
assert_eq![(2000, 1000), dec.dimensions()];
assert_eq![
ColorType::Rgb8,
dec.color_type(),
"Image MUST have the Rgb8 format"
];
let correct_bytes = dec
.into_reader()
.expect("Unable to read file")
.bytes()
.map(|x| x.expect("Unable to read byte"))
.collect::<Vec<u8>>();
assert_eq![6_000_000, correct_bytes.len()];
}
#[test]
fn underlying_error() {
use std::error::Error;
let mut not_png =
std::fs::read("tests/images/png/bugfixes/debug_triangle_corners_widescreen.png")
.unwrap();
not_png[0] = 0;
let error = PngDecoder::new(&not_png[..]).err().unwrap();
let _ = error
.source()
.unwrap()
.downcast_ref::<png::DecodingError>()
.expect("Caused by a png error");
}
#[test]
fn encode_bad_color_type() {
// regression test for issue #1663
let image = DynamicImage::new_rgb32f(1, 1);
let mut target = Cursor::new(vec![]);
let _ = image.write_to(&mut target, ImageOutputFormat::Png);
}
}

vendor/image/src/codecs/pnm/autobreak.rs
//! Insert line breaks between written buffers when they would overflow the line length.
use std::io;
// The pnm standard says to insert line breaks after 70 characters. This writer assumes that the
// caller does not write any line breaks itself. We have to be careful to fully commit buffers or
// not commit them at all, otherwise we might insert a newline in the middle of a token.
pub(crate) struct AutoBreak<W: io::Write> {
wrapped: W,
line_capacity: usize,
line: Vec<u8>,
has_newline: bool,
panicked: bool, // see https://github.com/rust-lang/rust/issues/30888
}
impl<W: io::Write> AutoBreak<W> {
pub(crate) fn new(writer: W, line_capacity: usize) -> Self {
AutoBreak {
wrapped: writer,
line_capacity,
line: Vec::with_capacity(line_capacity + 1),
has_newline: false,
panicked: false,
}
}
fn flush_buf(&mut self) -> io::Result<()> {
// from BufWriter
let mut written = 0;
let len = self.line.len();
let mut ret = Ok(());
while written < len {
self.panicked = true;
let r = self.wrapped.write(&self.line[written..]);
self.panicked = false;
match r {
Ok(0) => {
ret = Err(io::Error::new(
io::ErrorKind::WriteZero,
"failed to write the buffered data",
));
break;
}
Ok(n) => written += n,
Err(ref e) if e.kind() == io::ErrorKind::Interrupted => {}
Err(e) => {
ret = Err(e);
break;
}
}
}
if written > 0 {
self.line.drain(..written);
}
ret
}
}
impl<W: io::Write> io::Write for AutoBreak<W> {
fn write(&mut self, buffer: &[u8]) -> io::Result<usize> {
if self.has_newline {
self.flush()?;
self.has_newline = false;
}
if !self.line.is_empty() && self.line.len() + buffer.len() > self.line_capacity {
self.line.push(b'\n');
self.has_newline = true;
self.flush()?;
self.has_newline = false;
}
self.line.extend_from_slice(buffer);
Ok(buffer.len())
}
fn flush(&mut self) -> io::Result<()> {
self.flush_buf()?;
self.wrapped.flush()
}
}
impl<W: io::Write> Drop for AutoBreak<W> {
fn drop(&mut self) {
if !self.panicked {
let _r = self.flush_buf();
// internal writer flushed automatically by Drop
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::Write;
#[test]
fn test_aligned_writes() {
let mut output = Vec::new();
{
let mut writer = AutoBreak::new(&mut output, 10);
writer.write_all(b"0123456789").unwrap();
writer.write_all(b"0123456789").unwrap();
}
assert_eq!(output.as_slice(), b"0123456789\n0123456789");
}
#[test]
fn test_greater_writes() {
let mut output = Vec::new();
{
let mut writer = AutoBreak::new(&mut output, 10);
writer.write_all(b"012").unwrap();
writer.write_all(b"345").unwrap();
writer.write_all(b"0123456789").unwrap();
writer.write_all(b"012345678910").unwrap();
writer.write_all(b"_").unwrap();
}
assert_eq!(output.as_slice(), b"012345\n0123456789\n012345678910\n_");
}
}

vendor/image/src/codecs/pnm/decoder.rs
(File diff suppressed because it is too large)

vendor/image/src/codecs/pnm/encoder.rs
//! Encoding of PNM Images
use std::fmt;
use std::io;
use std::io::Write;
use super::AutoBreak;
use super::{ArbitraryHeader, ArbitraryTuplType, BitmapHeader, GraymapHeader, PixmapHeader};
use super::{HeaderRecord, PnmHeader, PnmSubtype, SampleEncoding};
use crate::color::{ColorType, ExtendedColorType};
use crate::error::{
ImageError, ImageResult, ParameterError, ParameterErrorKind, UnsupportedError,
UnsupportedErrorKind,
};
use crate::image::{ImageEncoder, ImageFormat};
use byteorder::{BigEndian, WriteBytesExt};
enum HeaderStrategy {
Dynamic,
Subtype(PnmSubtype),
Chosen(PnmHeader),
}
#[derive(Clone, Copy)]
pub enum FlatSamples<'a> {
U8(&'a [u8]),
U16(&'a [u16]),
}
/// Encodes images to any of the `pnm` image formats.
pub struct PnmEncoder<W: Write> {
writer: W,
header: HeaderStrategy,
}
/// Encapsulate the checking system in the type system. None of the fields are actually accessed,
/// but requiring them forces us to construct the struct validly anyway.
struct CheckedImageBuffer<'a> {
_image: FlatSamples<'a>,
_width: u32,
_height: u32,
_color: ExtendedColorType,
}
// Check the header against the buffer. Each struct produces the next after a check.
struct UncheckedHeader<'a> {
header: &'a PnmHeader,
}
struct CheckedDimensions<'a> {
unchecked: UncheckedHeader<'a>,
width: u32,
height: u32,
}
struct CheckedHeaderColor<'a> {
dimensions: CheckedDimensions<'a>,
color: ExtendedColorType,
}
struct CheckedHeader<'a> {
color: CheckedHeaderColor<'a>,
encoding: TupleEncoding<'a>,
_image: CheckedImageBuffer<'a>,
}
enum TupleEncoding<'a> {
PbmBits {
samples: FlatSamples<'a>,
width: u32,
},
Ascii {
samples: FlatSamples<'a>,
},
Bytes {
samples: FlatSamples<'a>,
},
}
impl<W: Write> PnmEncoder<W> {
/// Create new PnmEncoder from the `writer`.
///
/// The encoded images will have some `pnm` format. If more control over the image type is
/// required, use either one of `with_subtype` or `with_header`. For more information on the
/// behaviour, see `with_dynamic_header`.
pub fn new(writer: W) -> Self {
PnmEncoder {
writer,
header: HeaderStrategy::Dynamic,
}
}
/// Encode a specific pnm subtype image.
///
/// The magic number and encoding type will be chosen as provided while the rest of the header
/// data will be generated dynamically. Trying to encode incompatible images (e.g. encoding an
/// RGB image as Graymap) will result in an error.
///
/// This will overwrite the effect of earlier calls to `with_header` and `with_dynamic_header`.
pub fn with_subtype(self, subtype: PnmSubtype) -> Self {
PnmEncoder {
writer: self.writer,
header: HeaderStrategy::Subtype(subtype),
}
}
/// Enforce the use of a chosen header.
///
    /// While this option gives the most control over the actual written data, the encoding process
    /// will error in case the header data and image parameters do not agree. It is the user's
    /// obligation to ensure that the width and height are set accordingly, for example.
///
/// Choose this option if you want a lossless decoding/encoding round trip.
///
/// This will overwrite the effect of earlier calls to `with_subtype` and `with_dynamic_header`.
pub fn with_header(self, header: PnmHeader) -> Self {
PnmEncoder {
writer: self.writer,
header: HeaderStrategy::Chosen(header),
}
}
/// Create the header dynamically for each image.
///
/// This is the default option upon creation of the encoder. With this, most images should be
/// encodable but the specific format chosen is out of the users control. The pnm subtype is
/// chosen arbitrarily by the library.
///
/// This will overwrite the effect of earlier calls to `with_subtype` and `with_header`.
pub fn with_dynamic_header(self) -> Self {
PnmEncoder {
writer: self.writer,
header: HeaderStrategy::Dynamic,
}
}
    /// Encode an image whose samples are represented as `u8` or `u16`.
///
    /// Some `pnm` subtypes are incompatible with some color options, and a chosen header almost
    /// certainly is incompatible with any deviation from the original decoded image.
pub fn encode<'s, S>(
&mut self,
image: S,
width: u32,
height: u32,
color: ColorType,
) -> ImageResult<()>
where
S: Into<FlatSamples<'s>>,
{
let image = image.into();
match self.header {
HeaderStrategy::Dynamic => {
self.write_dynamic_header(image, width, height, color.into())
}
HeaderStrategy::Subtype(subtype) => {
self.write_subtyped_header(subtype, image, width, height, color.into())
}
HeaderStrategy::Chosen(ref header) => Self::write_with_header(
&mut self.writer,
header,
image,
width,
height,
color.into(),
),
}
}
/// Choose any valid pnm format that the image can be expressed in and write its header.
///
/// Returns how the body should be written if successful.
fn write_dynamic_header(
&mut self,
image: FlatSamples,
width: u32,
height: u32,
color: ExtendedColorType,
) -> ImageResult<()> {
let depth = u32::from(color.channel_count());
let (maxval, tupltype) = match color {
ExtendedColorType::L1 => (1, ArbitraryTuplType::BlackAndWhite),
ExtendedColorType::L8 => (0xff, ArbitraryTuplType::Grayscale),
ExtendedColorType::L16 => (0xffff, ArbitraryTuplType::Grayscale),
ExtendedColorType::La1 => (1, ArbitraryTuplType::BlackAndWhiteAlpha),
ExtendedColorType::La8 => (0xff, ArbitraryTuplType::GrayscaleAlpha),
ExtendedColorType::La16 => (0xffff, ArbitraryTuplType::GrayscaleAlpha),
ExtendedColorType::Rgb8 => (0xff, ArbitraryTuplType::RGB),
ExtendedColorType::Rgb16 => (0xffff, ArbitraryTuplType::RGB),
ExtendedColorType::Rgba8 => (0xff, ArbitraryTuplType::RGBAlpha),
ExtendedColorType::Rgba16 => (0xffff, ArbitraryTuplType::RGBAlpha),
_ => {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Pnm.into(),
UnsupportedErrorKind::Color(color),
),
))
}
};
let header = PnmHeader {
decoded: HeaderRecord::Arbitrary(ArbitraryHeader {
width,
height,
depth,
maxval,
tupltype: Some(tupltype),
}),
encoded: None,
};
Self::write_with_header(&mut self.writer, &header, image, width, height, color)
}
    /// Try to encode the image with the chosen format, using its corresponding pixel encoding type.
fn write_subtyped_header(
&mut self,
subtype: PnmSubtype,
image: FlatSamples,
width: u32,
height: u32,
color: ExtendedColorType,
) -> ImageResult<()> {
let header = match (subtype, color) {
(PnmSubtype::ArbitraryMap, color) => {
return self.write_dynamic_header(image, width, height, color)
}
(PnmSubtype::Pixmap(encoding), ExtendedColorType::Rgb8) => PnmHeader {
decoded: HeaderRecord::Pixmap(PixmapHeader {
encoding,
width,
height,
maxval: 255,
}),
encoded: None,
},
(PnmSubtype::Graymap(encoding), ExtendedColorType::L8) => PnmHeader {
decoded: HeaderRecord::Graymap(GraymapHeader {
encoding,
width,
height,
maxwhite: 255,
}),
encoded: None,
},
(PnmSubtype::Bitmap(encoding), ExtendedColorType::L8)
| (PnmSubtype::Bitmap(encoding), ExtendedColorType::L1) => PnmHeader {
decoded: HeaderRecord::Bitmap(BitmapHeader {
encoding,
width,
height,
}),
encoded: None,
},
(_, _) => {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(
"Color type can not be represented in the chosen format".to_owned(),
),
)));
}
};
Self::write_with_header(&mut self.writer, &header, image, width, height, color)
}
/// Try to encode the image with the chosen header, checking if values are correct.
///
/// Returns how the body should be written if successful.
fn write_with_header(
writer: &mut dyn Write,
header: &PnmHeader,
image: FlatSamples,
width: u32,
height: u32,
color: ExtendedColorType,
) -> ImageResult<()> {
let unchecked = UncheckedHeader { header };
unchecked
.check_header_dimensions(width, height)?
.check_header_color(color)?
.check_sample_values(image)?
.write_header(writer)?
.write_image(writer)
}
}
impl<W: Write> ImageEncoder for PnmEncoder<W> {
fn write_image(
mut self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
self.encode(buf, width, height, color_type)
}
}
impl<'a> CheckedImageBuffer<'a> {
fn check(
image: FlatSamples<'a>,
width: u32,
height: u32,
color: ExtendedColorType,
) -> ImageResult<CheckedImageBuffer<'a>> {
let components = color.channel_count() as usize;
let uwidth = width as usize;
let uheight = height as usize;
let expected_len = components
.checked_mul(uwidth)
.and_then(|v| v.checked_mul(uheight));
if Some(image.len()) != expected_len {
            // Image buffer does not correspond to size and color.
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
Ok(CheckedImageBuffer {
_image: image,
_width: width,
_height: height,
_color: color,
})
}
}
impl<'a> UncheckedHeader<'a> {
fn check_header_dimensions(
self,
width: u32,
height: u32,
) -> ImageResult<CheckedDimensions<'a>> {
if self.header.width() != width || self.header.height() != height {
// Chosen header does not match Image dimensions.
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
Ok(CheckedDimensions {
unchecked: self,
width,
height,
})
}
}
impl<'a> CheckedDimensions<'a> {
    // Check color compatibility with the header. This will only error when we are certain that
    // the combination is bogus (e.g. combining Pixmap and Palette) but allows uncertain
    // combinations (basically an `ArbitraryTuplType::Custom` with any color of fitting depth).
fn check_header_color(self, color: ExtendedColorType) -> ImageResult<CheckedHeaderColor<'a>> {
let components = u32::from(color.channel_count());
match *self.unchecked.header {
PnmHeader {
decoded: HeaderRecord::Bitmap(_),
..
} => match color {
ExtendedColorType::L1 | ExtendedColorType::L8 | ExtendedColorType::L16 => (),
_ => {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(
                            "PBM format only supports luma color types".to_owned(),
),
)))
}
},
PnmHeader {
decoded: HeaderRecord::Graymap(_),
..
} => match color {
ExtendedColorType::L1 | ExtendedColorType::L8 | ExtendedColorType::L16 => (),
_ => {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(
                            "PGM format only supports luma color types".to_owned(),
),
)))
}
},
PnmHeader {
decoded: HeaderRecord::Pixmap(_),
..
} => match color {
ExtendedColorType::Rgb8 => (),
_ => {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(
                            "PPM format only supports ExtendedColorType::Rgb8".to_owned(),
),
)))
}
},
PnmHeader {
decoded:
HeaderRecord::Arbitrary(ArbitraryHeader {
depth,
ref tupltype,
..
}),
..
} => match (tupltype, color) {
(&Some(ArbitraryTuplType::BlackAndWhite), ExtendedColorType::L1) => (),
(&Some(ArbitraryTuplType::BlackAndWhiteAlpha), ExtendedColorType::La8) => (),
(&Some(ArbitraryTuplType::Grayscale), ExtendedColorType::L1) => (),
(&Some(ArbitraryTuplType::Grayscale), ExtendedColorType::L8) => (),
(&Some(ArbitraryTuplType::Grayscale), ExtendedColorType::L16) => (),
(&Some(ArbitraryTuplType::GrayscaleAlpha), ExtendedColorType::La8) => (),
(&Some(ArbitraryTuplType::RGB), ExtendedColorType::Rgb8) => (),
(&Some(ArbitraryTuplType::RGBAlpha), ExtendedColorType::Rgba8) => (),
(&None, _) if depth == components => (),
(&Some(ArbitraryTuplType::Custom(_)), _) if depth == components => (),
_ if depth != components => {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(format!(
"Depth mismatch: header {} vs. color {}",
depth, components
)),
)))
}
_ => {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(
"Invalid color type for selected PAM color type".to_owned(),
),
)))
}
},
}
Ok(CheckedHeaderColor {
dimensions: self,
color,
})
}
}
impl<'a> CheckedHeaderColor<'a> {
fn check_sample_values(self, image: FlatSamples<'a>) -> ImageResult<CheckedHeader<'a>> {
let header_maxval = match self.dimensions.unchecked.header.decoded {
HeaderRecord::Bitmap(_) => 1,
HeaderRecord::Graymap(GraymapHeader { maxwhite, .. }) => maxwhite,
HeaderRecord::Pixmap(PixmapHeader { maxval, .. }) => maxval,
HeaderRecord::Arbitrary(ArbitraryHeader { maxval, .. }) => maxval,
};
// We trust the image color bit count to be correct at least.
let max_sample = match self.color {
ExtendedColorType::Unknown(n) if n <= 16 => (1 << n) - 1,
ExtendedColorType::L1 => 1,
ExtendedColorType::L8
| ExtendedColorType::La8
| ExtendedColorType::Rgb8
| ExtendedColorType::Rgba8
| ExtendedColorType::Bgr8
| ExtendedColorType::Bgra8 => 0xff,
ExtendedColorType::L16
| ExtendedColorType::La16
| ExtendedColorType::Rgb16
| ExtendedColorType::Rgba16 => 0xffff,
_ => {
// Unsupported target color type.
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Pnm.into(),
UnsupportedErrorKind::Color(self.color),
),
));
}
};
        // Avoid the performance-heavy check if possible, e.g. if the header has been chosen by us.
if header_maxval < max_sample && !image.all_smaller(header_maxval) {
// Sample value greater than allowed for chosen header.
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Pnm.into(),
UnsupportedErrorKind::GenericFeature(
"Sample value greater than allowed for chosen header".to_owned(),
),
),
));
}
let encoding = image.encoding_for(&self.dimensions.unchecked.header.decoded);
let image = CheckedImageBuffer::check(
image,
self.dimensions.width,
self.dimensions.height,
self.color,
)?;
Ok(CheckedHeader {
color: self,
encoding,
_image: image,
})
}
}
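The `max_sample` table in `check_sample_values` follows the rule that an n-bit channel has maximum value 2^n - 1; a hypothetical standalone helper:

```rust
/// Maximum sample value for an n-bit channel (1 <= n <= 16), matching
/// the `max_sample` match above: 1-bit -> 1, 8-bit -> 0xff, 16-bit -> 0xffff.
fn max_sample_for_bits(bits: u32) -> u32 {
    assert!((1..=16).contains(&bits));
    (1u32 << bits) - 1
}

fn main() {
    assert_eq!(max_sample_for_bits(1), 1);
    assert_eq!(max_sample_for_bits(8), 0xff);
    assert_eq!(max_sample_for_bits(16), 0xffff);
    println!("ok");
}
```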
impl<'a> CheckedHeader<'a> {
fn write_header(self, writer: &mut dyn Write) -> ImageResult<TupleEncoding<'a>> {
self.header().write(writer)?;
Ok(self.encoding)
}
fn header(&self) -> &PnmHeader {
self.color.dimensions.unchecked.header
}
}
struct SampleWriter<'a>(&'a mut dyn Write);
impl<'a> SampleWriter<'a> {
fn write_samples_ascii<V>(self, samples: V) -> io::Result<()>
where
V: Iterator,
V::Item: fmt::Display,
{
let mut auto_break_writer = AutoBreak::new(self.0, 70);
for value in samples {
write!(auto_break_writer, "{} ", value)?;
}
auto_break_writer.flush()
}
    /* Default gives 0 for all primitives. TODO: replace this with `Zeroable` once it hits stable */
    fn write_pbm_bits<V>(self, samples: &[V], width: u32) -> io::Result<()>
    where
        V: Default + Eq + Copy,
    {
// The length of an encoded scanline
let line_width = (width - 1) / 8 + 1;
        // We'll be writing single bytes, so buffer a whole scanline first
let mut line_buffer = Vec::with_capacity(line_width as usize);
for line in samples.chunks(width as usize) {
for byte_bits in line.chunks(8) {
let mut byte = 0u8;
for i in 0..8 {
// Black pixels are encoded as 1s
if let Some(&v) = byte_bits.get(i) {
if v == V::default() {
byte |= 1u8 << (7 - i)
}
}
}
line_buffer.push(byte)
}
self.0.write_all(line_buffer.as_slice())?;
line_buffer.clear();
}
self.0.flush()
}
}
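The bit-packing loop in `write_pbm_bits` can be isolated as a sketch; `pack_pbm_line` is a hypothetical helper that packs one scanline, most significant bit first, with zero-valued (black) samples encoded as 1 bits:

```rust
/// Pack one PBM scanline: 8 samples per byte, MSB first; a zero sample
/// (black) becomes a 1 bit, as the PBM binary format requires.
fn pack_pbm_line(samples: &[u8]) -> Vec<u8> {
    samples
        .chunks(8)
        .map(|bits| {
            let mut byte = 0u8;
            for (i, &v) in bits.iter().enumerate() {
                if v == 0 {
                    byte |= 1 << (7 - i);
                }
            }
            byte
        })
        .collect()
}

fn main() {
    // Alternating black/white: zeros at even positions -> 0b1010_1010.
    assert_eq!(pack_pbm_line(&[0, 1, 0, 1, 0, 1, 0, 1]), vec![0b1010_1010]);
    // Nine black samples span two bytes; the trailing bits stay 0.
    assert_eq!(pack_pbm_line(&[0; 9]), vec![0xff, 0x80]);
    println!("ok");
}
```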
impl<'a> FlatSamples<'a> {
fn len(&self) -> usize {
match *self {
FlatSamples::U8(arr) => arr.len(),
FlatSamples::U16(arr) => arr.len(),
}
}
    /// Whether every sample fits within `max_val`, as the header's maxval check requires.
    fn all_smaller(&self, max_val: u32) -> bool {
        match *self {
            FlatSamples::U8(arr) => arr.iter().all(|&val| u32::from(val) <= max_val),
            FlatSamples::U16(arr) => arr.iter().all(|&val| u32::from(val) <= max_val),
        }
    }
fn encoding_for(&self, header: &HeaderRecord) -> TupleEncoding<'a> {
match *header {
HeaderRecord::Bitmap(BitmapHeader {
encoding: SampleEncoding::Binary,
width,
..
}) => TupleEncoding::PbmBits {
samples: *self,
width,
},
HeaderRecord::Bitmap(BitmapHeader {
encoding: SampleEncoding::Ascii,
..
}) => TupleEncoding::Ascii { samples: *self },
HeaderRecord::Arbitrary(_) => TupleEncoding::Bytes { samples: *self },
HeaderRecord::Graymap(GraymapHeader {
encoding: SampleEncoding::Ascii,
..
})
| HeaderRecord::Pixmap(PixmapHeader {
encoding: SampleEncoding::Ascii,
..
}) => TupleEncoding::Ascii { samples: *self },
HeaderRecord::Graymap(GraymapHeader {
encoding: SampleEncoding::Binary,
..
})
| HeaderRecord::Pixmap(PixmapHeader {
encoding: SampleEncoding::Binary,
..
}) => TupleEncoding::Bytes { samples: *self },
}
}
}
impl<'a> From<&'a [u8]> for FlatSamples<'a> {
fn from(samples: &'a [u8]) -> Self {
FlatSamples::U8(samples)
}
}
impl<'a> From<&'a [u16]> for FlatSamples<'a> {
fn from(samples: &'a [u16]) -> Self {
FlatSamples::U16(samples)
}
}
impl<'a> TupleEncoding<'a> {
fn write_image(&self, writer: &mut dyn Write) -> ImageResult<()> {
match *self {
TupleEncoding::PbmBits {
samples: FlatSamples::U8(samples),
width,
} => SampleWriter(writer)
.write_pbm_bits(samples, width)
.map_err(ImageError::IoError),
TupleEncoding::PbmBits {
samples: FlatSamples::U16(samples),
width,
} => SampleWriter(writer)
.write_pbm_bits(samples, width)
.map_err(ImageError::IoError),
TupleEncoding::Bytes {
samples: FlatSamples::U8(samples),
} => writer.write_all(samples).map_err(ImageError::IoError),
TupleEncoding::Bytes {
samples: FlatSamples::U16(samples),
} => samples.iter().try_for_each(|&sample| {
writer
.write_u16::<BigEndian>(sample)
.map_err(ImageError::IoError)
}),
TupleEncoding::Ascii {
samples: FlatSamples::U8(samples),
} => SampleWriter(writer)
.write_samples_ascii(samples.iter())
.map_err(ImageError::IoError),
TupleEncoding::Ascii {
samples: FlatSamples::U16(samples),
} => SampleWriter(writer)
.write_samples_ascii(samples.iter())
.map_err(ImageError::IoError),
}
}
}

vendor/image/src/codecs/pnm/header.rs
use std::{fmt, io};
/// The kind of encoding used to store sample values
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum SampleEncoding {
/// Samples are unsigned binary integers in big endian
Binary,
/// Samples are encoded as decimal ASCII strings separated by whitespace
Ascii,
}
/// Denotes the category of the magic number
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum PnmSubtype {
/// Magic numbers P1 and P4
Bitmap(SampleEncoding),
/// Magic numbers P2 and P5
Graymap(SampleEncoding),
/// Magic numbers P3 and P6
Pixmap(SampleEncoding),
/// Magic number P7
ArbitraryMap,
}
/// Stores the complete header data of a file.
///
/// Internally, provides mechanisms for lossless reencoding. After reading a file with the decoder
/// it is possible to recover the header and construct an encoder. Using the encoder on the just
/// loaded image should result in a byte copy of the original file (for single image pnms without
/// additional trailing data).
pub struct PnmHeader {
pub(crate) decoded: HeaderRecord,
pub(crate) encoded: Option<Vec<u8>>,
}
pub(crate) enum HeaderRecord {
Bitmap(BitmapHeader),
Graymap(GraymapHeader),
Pixmap(PixmapHeader),
Arbitrary(ArbitraryHeader),
}
/// Header produced by a `pbm` file ("Portable Bit Map")
#[derive(Clone, Copy, Debug)]
pub struct BitmapHeader {
/// Binary or Ascii encoded file
pub encoding: SampleEncoding,
/// Height of the image file
pub height: u32,
/// Width of the image file
pub width: u32,
}
/// Header produced by a `pgm` file ("Portable Gray Map")
#[derive(Clone, Copy, Debug)]
pub struct GraymapHeader {
/// Binary or Ascii encoded file
pub encoding: SampleEncoding,
/// Height of the image file
pub height: u32,
/// Width of the image file
pub width: u32,
/// Maximum sample value within the image
pub maxwhite: u32,
}
/// Header produced by a `ppm` file ("Portable Pixel Map")
#[derive(Clone, Copy, Debug)]
pub struct PixmapHeader {
/// Binary or Ascii encoded file
pub encoding: SampleEncoding,
/// Height of the image file
pub height: u32,
/// Width of the image file
pub width: u32,
/// Maximum sample value within the image
pub maxval: u32,
}
/// Header produced by a `pam` file ("Portable Arbitrary Map")
#[derive(Clone, Debug)]
pub struct ArbitraryHeader {
/// Height of the image file
pub height: u32,
/// Width of the image file
pub width: u32,
/// Number of color channels
pub depth: u32,
/// Maximum sample value within the image
pub maxval: u32,
/// Color interpretation of image pixels
pub tupltype: Option<ArbitraryTuplType>,
}
/// Standardized tuple type specifiers in the header of a `pam`.
#[derive(Clone, Debug)]
pub enum ArbitraryTuplType {
/// Pixels are either black (0) or white (1)
BlackAndWhite,
/// Pixels are either black (0) or white (1), with an additional alpha channel
BlackAndWhiteAlpha,
/// Pixels represent the amount of white
Grayscale,
/// Grayscale with an additional alpha channel
GrayscaleAlpha,
/// Three channels: Red, Green, Blue
RGB,
/// Four channels: Red, Green, Blue, Alpha
RGBAlpha,
/// An image format which is not standardized
Custom(String),
}
impl ArbitraryTuplType {
pub(crate) fn name(&self) -> &str {
match self {
ArbitraryTuplType::BlackAndWhite => "BLACKANDWHITE",
ArbitraryTuplType::BlackAndWhiteAlpha => "BLACKANDWHITE_ALPHA",
ArbitraryTuplType::Grayscale => "GRAYSCALE",
ArbitraryTuplType::GrayscaleAlpha => "GRAYSCALE_ALPHA",
ArbitraryTuplType::RGB => "RGB",
ArbitraryTuplType::RGBAlpha => "RGB_ALPHA",
ArbitraryTuplType::Custom(custom) => custom,
}
}
}
impl PnmSubtype {
/// Get the two magic constant bytes corresponding to this format subtype.
pub fn magic_constant(self) -> &'static [u8; 2] {
match self {
PnmSubtype::Bitmap(SampleEncoding::Ascii) => b"P1",
PnmSubtype::Graymap(SampleEncoding::Ascii) => b"P2",
PnmSubtype::Pixmap(SampleEncoding::Ascii) => b"P3",
PnmSubtype::Bitmap(SampleEncoding::Binary) => b"P4",
PnmSubtype::Graymap(SampleEncoding::Binary) => b"P5",
PnmSubtype::Pixmap(SampleEncoding::Binary) => b"P6",
PnmSubtype::ArbitraryMap => b"P7",
}
}
/// Whether samples are stored as binary or as decimal ASCII
pub fn sample_encoding(self) -> SampleEncoding {
match self {
PnmSubtype::ArbitraryMap => SampleEncoding::Binary,
PnmSubtype::Bitmap(enc) => enc,
PnmSubtype::Graymap(enc) => enc,
PnmSubtype::Pixmap(enc) => enc,
}
}
}
impl PnmHeader {
/// Retrieve the format subtype from which the header was created.
pub fn subtype(&self) -> PnmSubtype {
match self.decoded {
HeaderRecord::Bitmap(BitmapHeader { encoding, .. }) => PnmSubtype::Bitmap(encoding),
HeaderRecord::Graymap(GraymapHeader { encoding, .. }) => PnmSubtype::Graymap(encoding),
HeaderRecord::Pixmap(PixmapHeader { encoding, .. }) => PnmSubtype::Pixmap(encoding),
HeaderRecord::Arbitrary(ArbitraryHeader { .. }) => PnmSubtype::ArbitraryMap,
}
}
/// The width of the image this header is for.
pub fn width(&self) -> u32 {
match self.decoded {
HeaderRecord::Bitmap(BitmapHeader { width, .. }) => width,
HeaderRecord::Graymap(GraymapHeader { width, .. }) => width,
HeaderRecord::Pixmap(PixmapHeader { width, .. }) => width,
HeaderRecord::Arbitrary(ArbitraryHeader { width, .. }) => width,
}
}
/// The height of the image this header is for.
pub fn height(&self) -> u32 {
match self.decoded {
HeaderRecord::Bitmap(BitmapHeader { height, .. }) => height,
HeaderRecord::Graymap(GraymapHeader { height, .. }) => height,
HeaderRecord::Pixmap(PixmapHeader { height, .. }) => height,
HeaderRecord::Arbitrary(ArbitraryHeader { height, .. }) => height,
}
}
/// The biggest value a sample can have. In other words, the colour resolution.
pub fn maximal_sample(&self) -> u32 {
match self.decoded {
HeaderRecord::Bitmap(BitmapHeader { .. }) => 1,
HeaderRecord::Graymap(GraymapHeader { maxwhite, .. }) => maxwhite,
HeaderRecord::Pixmap(PixmapHeader { maxval, .. }) => maxval,
HeaderRecord::Arbitrary(ArbitraryHeader { maxval, .. }) => maxval,
}
}
/// Retrieve the underlying bitmap header if any
pub fn as_bitmap(&self) -> Option<&BitmapHeader> {
match self.decoded {
HeaderRecord::Bitmap(ref bitmap) => Some(bitmap),
_ => None,
}
}
/// Retrieve the underlying graymap header if any
pub fn as_graymap(&self) -> Option<&GraymapHeader> {
match self.decoded {
HeaderRecord::Graymap(ref graymap) => Some(graymap),
_ => None,
}
}
/// Retrieve the underlying pixmap header if any
pub fn as_pixmap(&self) -> Option<&PixmapHeader> {
match self.decoded {
HeaderRecord::Pixmap(ref pixmap) => Some(pixmap),
_ => None,
}
}
/// Retrieve the underlying arbitrary header if any
pub fn as_arbitrary(&self) -> Option<&ArbitraryHeader> {
match self.decoded {
HeaderRecord::Arbitrary(ref arbitrary) => Some(arbitrary),
_ => None,
}
}
/// Write the header back into a binary stream
pub fn write(&self, writer: &mut dyn io::Write) -> io::Result<()> {
writer.write_all(self.subtype().magic_constant())?;
match *self {
PnmHeader {
encoded: Some(ref content),
..
} => writer.write_all(content),
PnmHeader {
decoded:
HeaderRecord::Bitmap(BitmapHeader {
encoding: _encoding,
width,
height,
}),
..
} => writeln!(writer, "\n{} {}", width, height),
PnmHeader {
decoded:
HeaderRecord::Graymap(GraymapHeader {
encoding: _encoding,
width,
height,
maxwhite,
}),
..
} => writeln!(writer, "\n{} {} {}", width, height, maxwhite),
PnmHeader {
decoded:
HeaderRecord::Pixmap(PixmapHeader {
encoding: _encoding,
width,
height,
maxval,
}),
..
} => writeln!(writer, "\n{} {} {}", width, height, maxval),
PnmHeader {
decoded:
HeaderRecord::Arbitrary(ArbitraryHeader {
width,
height,
depth,
maxval,
ref tupltype,
}),
..
} => {
struct TupltypeWriter<'a>(&'a Option<ArbitraryTuplType>);
impl<'a> fmt::Display for TupltypeWriter<'a> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self.0 {
Some(tt) => writeln!(f, "TUPLTYPE {}", tt.name()),
None => Ok(()),
}
}
}
writeln!(
writer,
"\nWIDTH {}\nHEIGHT {}\nDEPTH {}\nMAXVAL {}\n{}ENDHDR",
width,
height,
depth,
maxval,
TupltypeWriter(tupltype)
)
}
}
}
}
impl From<BitmapHeader> for PnmHeader {
fn from(header: BitmapHeader) -> Self {
PnmHeader {
decoded: HeaderRecord::Bitmap(header),
encoded: None,
}
}
}
impl From<GraymapHeader> for PnmHeader {
fn from(header: GraymapHeader) -> Self {
PnmHeader {
decoded: HeaderRecord::Graymap(header),
encoded: None,
}
}
}
impl From<PixmapHeader> for PnmHeader {
fn from(header: PixmapHeader) -> Self {
PnmHeader {
decoded: HeaderRecord::Pixmap(header),
encoded: None,
}
}
}
impl From<ArbitraryHeader> for PnmHeader {
fn from(header: ArbitraryHeader) -> Self {
PnmHeader {
decoded: HeaderRecord::Arbitrary(header),
encoded: None,
}
}
}
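The `ArbitraryHeader` branch of `PnmHeader::write` above assembles a P7 header field by field, with an optional `TUPLTYPE` line before `ENDHDR`. A hypothetical std-only sketch of the same layout (`pam_header` is an illustrative helper, not part of this crate; field names follow the PAM format):

```rust
// Sketch: build a PAM (P7) header string with the same line layout that
// `PnmHeader::write` produces for an `ArbitraryHeader`.
fn pam_header(width: u32, height: u32, depth: u32, maxval: u32, tupltype: Option<&str>) -> String {
    // The TUPLTYPE line is only emitted when a tuple type is present.
    let tt = tupltype.map(|t| format!("TUPLTYPE {}\n", t)).unwrap_or_default();
    format!(
        "P7\nWIDTH {}\nHEIGHT {}\nDEPTH {}\nMAXVAL {}\n{}ENDHDR\n",
        width, height, depth, maxval, tt
    )
}

fn main() {
    let h = pam_header(4, 4, 3, 255, Some("RGB"));
    assert_eq!(h, "P7\nWIDTH 4\nHEIGHT 4\nDEPTH 3\nMAXVAL 255\nTUPLTYPE RGB\nENDHDR\n");
    assert!(pam_header(1, 1, 1, 255, None).ends_with("MAXVAL 255\nENDHDR\n"));
}
```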

184
vendor/image/src/codecs/pnm/mod.rs vendored Normal file

@@ -0,0 +1,184 @@
//! Decoding of netpbm image formats (pbm, pgm, ppm and pam).
//!
//! The formats pbm, pgm and ppm are fully supported. The pam decoder recognizes the tuple types
//! `BLACKANDWHITE`, `GRAYSCALE` and `RGB` and explicitly recognizes but rejects their `_ALPHA`
//! variants for now as alpha color types are unsupported.
use self::autobreak::AutoBreak;
pub use self::decoder::PnmDecoder;
pub use self::encoder::PnmEncoder;
use self::header::HeaderRecord;
pub use self::header::{
ArbitraryHeader, ArbitraryTuplType, BitmapHeader, GraymapHeader, PixmapHeader,
};
pub use self::header::{PnmHeader, PnmSubtype, SampleEncoding};
mod autobreak;
mod decoder;
mod encoder;
mod header;
#[cfg(test)]
mod tests {
use super::*;
use crate::color::ColorType;
use crate::image::ImageDecoder;
use byteorder::{ByteOrder, NativeEndian};
fn execute_roundtrip_default(buffer: &[u8], width: u32, height: u32, color: ColorType) {
let mut encoded_buffer = Vec::new();
{
let mut encoder = PnmEncoder::new(&mut encoded_buffer);
encoder
.encode(buffer, width, height, color)
.expect("Failed to encode the image buffer");
}
let (header, loaded_color, loaded_image) = {
let decoder = PnmDecoder::new(&encoded_buffer[..]).unwrap();
let color_type = decoder.color_type();
let mut image = vec![0; decoder.total_bytes() as usize];
decoder
.read_image(&mut image)
.expect("Failed to decode the image");
let (_, header) = PnmDecoder::new(&encoded_buffer[..]).unwrap().into_inner();
(header, color_type, image)
};
assert_eq!(header.width(), width);
assert_eq!(header.height(), height);
assert_eq!(loaded_color, color);
assert_eq!(loaded_image.as_slice(), buffer);
}
fn execute_roundtrip_with_subtype(
buffer: &[u8],
width: u32,
height: u32,
color: ColorType,
subtype: PnmSubtype,
) {
let mut encoded_buffer = Vec::new();
{
let mut encoder = PnmEncoder::new(&mut encoded_buffer).with_subtype(subtype);
encoder
.encode(buffer, width, height, color)
.expect("Failed to encode the image buffer");
}
let (header, loaded_color, loaded_image) = {
let decoder = PnmDecoder::new(&encoded_buffer[..]).unwrap();
let color_type = decoder.color_type();
let mut image = vec![0; decoder.total_bytes() as usize];
decoder
.read_image(&mut image)
.expect("Failed to decode the image");
let (_, header) = PnmDecoder::new(&encoded_buffer[..]).unwrap().into_inner();
(header, color_type, image)
};
assert_eq!(header.width(), width);
assert_eq!(header.height(), height);
assert_eq!(header.subtype(), subtype);
assert_eq!(loaded_color, color);
assert_eq!(loaded_image.as_slice(), buffer);
}
fn execute_roundtrip_u16(buffer: &[u16], width: u32, height: u32, color: ColorType) {
let mut encoded_buffer = Vec::new();
{
let mut encoder = PnmEncoder::new(&mut encoded_buffer);
encoder
.encode(buffer, width, height, color)
.expect("Failed to encode the image buffer");
}
let (header, loaded_color, loaded_image) = {
let decoder = PnmDecoder::new(&encoded_buffer[..]).unwrap();
let color_type = decoder.color_type();
let mut image = vec![0; decoder.total_bytes() as usize];
decoder
.read_image(&mut image)
.expect("Failed to decode the image");
let (_, header) = PnmDecoder::new(&encoded_buffer[..]).unwrap().into_inner();
(header, color_type, image)
};
let mut buffer_u8 = vec![0; buffer.len() * 2];
NativeEndian::write_u16_into(buffer, &mut buffer_u8[..]);
assert_eq!(header.width(), width);
assert_eq!(header.height(), height);
assert_eq!(loaded_color, color);
assert_eq!(loaded_image, buffer_u8);
}
#[test]
fn roundtrip_gray() {
#[rustfmt::skip]
let buf: [u8; 16] = [
0, 0, 0, 255,
255, 255, 255, 255,
255, 0, 255, 0,
255, 0, 0, 0,
];
execute_roundtrip_default(&buf, 4, 4, ColorType::L8);
execute_roundtrip_with_subtype(&buf, 4, 4, ColorType::L8, PnmSubtype::ArbitraryMap);
execute_roundtrip_with_subtype(
&buf,
4,
4,
ColorType::L8,
PnmSubtype::Graymap(SampleEncoding::Ascii),
);
execute_roundtrip_with_subtype(
&buf,
4,
4,
ColorType::L8,
PnmSubtype::Graymap(SampleEncoding::Binary),
);
}
#[test]
fn roundtrip_rgb() {
#[rustfmt::skip]
let buf: [u8; 27] = [
0, 0, 0,
0, 0, 255,
0, 255, 0,
0, 255, 255,
255, 0, 0,
255, 0, 255,
255, 255, 0,
255, 255, 255,
255, 255, 255,
];
execute_roundtrip_default(&buf, 3, 3, ColorType::Rgb8);
execute_roundtrip_with_subtype(&buf, 3, 3, ColorType::Rgb8, PnmSubtype::ArbitraryMap);
execute_roundtrip_with_subtype(
&buf,
3,
3,
ColorType::Rgb8,
PnmSubtype::Pixmap(SampleEncoding::Binary),
);
execute_roundtrip_with_subtype(
&buf,
3,
3,
ColorType::Rgb8,
PnmSubtype::Pixmap(SampleEncoding::Ascii),
);
}
#[test]
fn roundtrip_u16() {
let buf: [u16; 6] = [0, 1, 0xFFFF, 0x1234, 0x3412, 0xBEAF];
execute_roundtrip_u16(&buf, 6, 1, ColorType::L16);
}
}
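In `execute_roundtrip_u16` above, `NativeEndian::write_u16_into` reinterprets the `u16` source buffer as native-endian bytes so it can be compared with the decoder's byte output. A std-only equivalent, assuming the same native-endian convention (`u16s_to_ne_bytes` is an illustrative helper, not part of this crate):

```rust
// Sketch: flatten u16 samples into native-endian bytes, mirroring what
// `byteorder::NativeEndian::write_u16_into` does in the test above.
fn u16s_to_ne_bytes(samples: &[u16]) -> Vec<u8> {
    samples.iter().flat_map(|s| s.to_ne_bytes()).collect()
}

fn main() {
    let b = u16s_to_ne_bytes(&[0x0102, 0xFFFF]);
    assert_eq!(b.len(), 4);
    // Round-tripping through from_ne_bytes recovers the original sample.
    assert_eq!(u16::from_ne_bytes([b[0], b[1]]), 0x0102);
}
```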

104
vendor/image/src/codecs/qoi.rs vendored Normal file

@@ -0,0 +1,104 @@
//! Decoding and encoding of QOI images
use crate::{
error::{DecodingError, EncodingError},
ColorType, ImageDecoder, ImageEncoder, ImageError, ImageFormat, ImageResult,
};
use std::io::{Cursor, Read, Write};
/// QOI decoder
pub struct QoiDecoder<R> {
decoder: qoi::Decoder<R>,
}
impl<R> QoiDecoder<R>
where
R: Read,
{
/// Creates a new decoder that decodes from the stream ```reader```
pub fn new(reader: R) -> ImageResult<Self> {
let decoder = qoi::Decoder::from_stream(reader).map_err(decoding_error)?;
Ok(Self { decoder })
}
}
impl<'a, R: Read + 'a> ImageDecoder<'a> for QoiDecoder<R> {
type Reader = Cursor<Vec<u8>>;
fn dimensions(&self) -> (u32, u32) {
(self.decoder.header().width, self.decoder.header().height)
}
fn color_type(&self) -> ColorType {
match self.decoder.header().channels {
qoi::Channels::Rgb => ColorType::Rgb8,
qoi::Channels::Rgba => ColorType::Rgba8,
}
}
fn into_reader(mut self) -> ImageResult<Self::Reader> {
let buffer = self.decoder.decode_to_vec().map_err(decoding_error)?;
Ok(Cursor::new(buffer))
}
}
fn decoding_error(error: qoi::Error) -> ImageError {
ImageError::Decoding(DecodingError::new(ImageFormat::Qoi.into(), error))
}
fn encoding_error(error: qoi::Error) -> ImageError {
ImageError::Encoding(EncodingError::new(ImageFormat::Qoi.into(), error))
}
/// QOI encoder
pub struct QoiEncoder<W> {
writer: W,
}
impl<W: Write> QoiEncoder<W> {
/// Creates a new encoder that writes its output to ```writer```
pub fn new(writer: W) -> Self {
Self { writer }
}
}
impl<W: Write> ImageEncoder for QoiEncoder<W> {
fn write_image(
mut self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
if !matches!(color_type, ColorType::Rgba8 | ColorType::Rgb8) {
return Err(ImageError::Encoding(EncodingError::new(
ImageFormat::Qoi.into(),
format!("unsupported color type {color_type:?}. Supported are Rgba8 and Rgb8."),
)));
}
// Encode data in QOI
let data = qoi::encode_to_vec(buf, width, height).map_err(encoding_error)?;
// Write data to buffer
self.writer.write_all(&data[..])?;
self.writer.flush()?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs::File;
#[test]
fn decode_test_image() {
let decoder = QoiDecoder::new(File::open("tests/images/qoi/basic-test.qoi").unwrap())
.expect("Unable to read QOI file");
assert_eq!((5, 5), decoder.dimensions());
assert_eq!(ColorType::Rgba8, decoder.color_type());
}
}
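Encoding and decoding above are delegated to the `qoi` crate. The format's core data structure is a 64-entry running array of previously seen pixels, addressed by a small hash defined in the QOI specification as `(r*3 + g*5 + b*7 + a*11) % 64`. A std-only sketch of that index function (illustrative only; not an API of this crate):

```rust
// Sketch: the pixel-index hash from the QOI specification. The encoder
// uses it to find a previously seen pixel and emit a 1-byte QOI_OP_INDEX
// chunk instead of a full pixel.
fn qoi_index(r: u8, g: u8, b: u8, a: u8) -> usize {
    (r as usize * 3 + g as usize * 5 + b as usize * 7 + a as usize * 11) % 64
}

fn main() {
    // Opaque black: (0*3 + 0*5 + 0*7 + 255*11) % 64 == 2805 % 64 == 53.
    assert_eq!(qoi_index(0, 0, 0, 255), 53);
    assert!(qoi_index(10, 20, 30, 255) < 64);
}
```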

502
vendor/image/src/codecs/tga/decoder.rs vendored Normal file

@@ -0,0 +1,502 @@
use super::header::{Header, ImageType, ALPHA_BIT_MASK, SCREEN_ORIGIN_BIT_MASK};
use crate::{
color::{ColorType, ExtendedColorType},
error::{
ImageError, ImageResult, LimitError, LimitErrorKind, UnsupportedError, UnsupportedErrorKind,
},
image::{ImageDecoder, ImageFormat, ImageReadBuffer},
};
use byteorder::ReadBytesExt;
use std::{
convert::TryFrom,
io::{self, Read, Seek},
mem,
};
struct ColorMap {
/// sizes in bytes
start_offset: usize,
entry_size: usize,
bytes: Vec<u8>,
}
impl ColorMap {
pub(crate) fn from_reader(
r: &mut dyn Read,
start_offset: u16,
num_entries: u16,
bits_per_entry: u8,
) -> ImageResult<ColorMap> {
let bytes_per_entry = (bits_per_entry as usize + 7) / 8;
let mut bytes = vec![0; bytes_per_entry * num_entries as usize];
r.read_exact(&mut bytes)?;
Ok(ColorMap {
entry_size: bytes_per_entry,
start_offset: start_offset as usize,
bytes,
})
}
/// Get one entry from the color map
pub(crate) fn get(&self, index: usize) -> Option<&[u8]> {
let entry = self.start_offset + self.entry_size * index;
self.bytes.get(entry..entry + self.entry_size)
}
}
/// The representation of a TGA decoder
pub struct TgaDecoder<R> {
r: R,
width: usize,
height: usize,
bytes_per_pixel: usize,
has_loaded_metadata: bool,
image_type: ImageType,
color_type: ColorType,
original_color_type: Option<ExtendedColorType>,
header: Header,
color_map: Option<ColorMap>,
// Used in read_scanline
line_read: Option<usize>,
line_remain_buff: Vec<u8>,
}
impl<R: Read + Seek> TgaDecoder<R> {
/// Create a new decoder that decodes from the stream `r`
pub fn new(r: R) -> ImageResult<TgaDecoder<R>> {
let mut decoder = TgaDecoder {
r,
width: 0,
height: 0,
bytes_per_pixel: 0,
has_loaded_metadata: false,
image_type: ImageType::Unknown,
color_type: ColorType::L8,
original_color_type: None,
header: Header::default(),
color_map: None,
line_read: None,
line_remain_buff: Vec::new(),
};
decoder.read_metadata()?;
Ok(decoder)
}
fn read_header(&mut self) -> ImageResult<()> {
self.header = Header::from_reader(&mut self.r)?;
self.image_type = ImageType::new(self.header.image_type);
self.width = self.header.image_width as usize;
self.height = self.header.image_height as usize;
self.bytes_per_pixel = (self.header.pixel_depth as usize + 7) / 8;
Ok(())
}
fn read_metadata(&mut self) -> ImageResult<()> {
if !self.has_loaded_metadata {
self.read_header()?;
self.read_image_id()?;
self.read_color_map()?;
self.read_color_information()?;
self.has_loaded_metadata = true;
}
Ok(())
}
/// Loads the color information for the decoder
///
/// To keep things simple, we won't handle bit depths that aren't divisible
/// by 8 or that are larger than 32.
fn read_color_information(&mut self) -> ImageResult<()> {
if self.header.pixel_depth % 8 != 0 || self.header.pixel_depth > 32 {
// Bit depth must be divisible by 8, and must be less than or equal
// to 32.
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Tga.into(),
UnsupportedErrorKind::Color(ExtendedColorType::Unknown(
self.header.pixel_depth,
)),
),
));
}
let num_alpha_bits = self.header.image_desc & ALPHA_BIT_MASK;
let other_channel_bits = if self.header.map_type != 0 {
self.header.map_entry_size
} else {
if num_alpha_bits > self.header.pixel_depth {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Tga.into(),
UnsupportedErrorKind::Color(ExtendedColorType::Unknown(
self.header.pixel_depth,
)),
),
));
}
self.header.pixel_depth - num_alpha_bits
};
let color = self.image_type.is_color();
match (num_alpha_bits, other_channel_bits, color) {
// really, the encoding is BGR and BGRA, this is fixed
// up with `TgaDecoder::reverse_encoding`.
(0, 32, true) => self.color_type = ColorType::Rgba8,
(8, 24, true) => self.color_type = ColorType::Rgba8,
(0, 24, true) => self.color_type = ColorType::Rgb8,
(8, 8, false) => self.color_type = ColorType::La8,
(0, 8, false) => self.color_type = ColorType::L8,
(8, 0, false) => {
// alpha-only image is treated as L8
self.color_type = ColorType::L8;
self.original_color_type = Some(ExtendedColorType::A8);
}
_ => {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Tga.into(),
UnsupportedErrorKind::Color(ExtendedColorType::Unknown(
self.header.pixel_depth,
)),
),
))
}
}
Ok(())
}
/// Read the image id field
///
/// We're not interested in this field, so this function skips it if it
/// is present
fn read_image_id(&mut self) -> ImageResult<()> {
self.r
.seek(io::SeekFrom::Current(i64::from(self.header.id_length)))?;
Ok(())
}
fn read_color_map(&mut self) -> ImageResult<()> {
if self.header.map_type == 1 {
// FIXME: we could reverse the map entries, which avoids having to reverse all pixels
// in the final output individually.
self.color_map = Some(ColorMap::from_reader(
&mut self.r,
self.header.map_origin,
self.header.map_length,
self.header.map_entry_size,
)?);
}
Ok(())
}
/// Expands indices into its mapped color
fn expand_color_map(&self, pixel_data: &[u8]) -> io::Result<Vec<u8>> {
#[inline]
fn bytes_to_index(bytes: &[u8]) -> usize {
let mut result = 0usize;
for byte in bytes.iter() {
result = result << 8 | *byte as usize;
}
result
}
let bytes_per_entry = (self.header.map_entry_size as usize + 7) / 8;
let mut result = Vec::with_capacity(self.width * self.height * bytes_per_entry);
if self.bytes_per_pixel == 0 {
return Err(io::ErrorKind::Other.into());
}
let color_map = self
.color_map
.as_ref()
.ok_or_else(|| io::Error::from(io::ErrorKind::Other))?;
for chunk in pixel_data.chunks(self.bytes_per_pixel) {
let index = bytes_to_index(chunk);
if let Some(color) = color_map.get(index) {
result.extend_from_slice(color);
} else {
return Err(io::ErrorKind::Other.into());
}
}
Ok(result)
}
/// Reads a run length encoded data for given number of bytes
fn read_encoded_data(&mut self, num_bytes: usize) -> io::Result<Vec<u8>> {
let mut pixel_data = Vec::with_capacity(num_bytes);
let mut repeat_buf = Vec::with_capacity(self.bytes_per_pixel);
while pixel_data.len() < num_bytes {
let run_packet = self.r.read_u8()?;
// If the highest bit in `run_packet` is set, then we repeat pixels
//
// Note: the TGA format adds 1 to both counts because having a count
// of 0 would be pointless.
if (run_packet & 0x80) != 0 {
// high bit set, so we will repeat the data
let repeat_count = ((run_packet & !0x80) + 1) as usize;
self.r
.by_ref()
.take(self.bytes_per_pixel as u64)
.read_to_end(&mut repeat_buf)?;
// get the repeating pixels from the bytes of the pixel stored in `repeat_buf`
let data = repeat_buf
.iter()
.cycle()
.take(repeat_count * self.bytes_per_pixel);
pixel_data.extend(data);
repeat_buf.clear();
} else {
// not set, so `run_packet+1` is the number of non-encoded pixels
let num_raw_bytes = (run_packet + 1) as usize * self.bytes_per_pixel;
self.r
.by_ref()
.take(num_raw_bytes as u64)
.read_to_end(&mut pixel_data)?;
}
}
if pixel_data.len() > num_bytes {
// FIXME: the last packet contained more data than we asked for!
// This should at least warrant a warning. We truncate the data since some methods rely on the
// length to be accurate in the success case.
pixel_data.truncate(num_bytes);
}
Ok(pixel_data)
}
/// Reads a run length encoded packet
fn read_all_encoded_data(&mut self) -> ImageResult<Vec<u8>> {
let num_bytes = self.width * self.height * self.bytes_per_pixel;
Ok(self.read_encoded_data(num_bytes)?)
}
/// Reads a run length encoded line
fn read_encoded_line(&mut self) -> io::Result<Vec<u8>> {
let line_num_bytes = self.width * self.bytes_per_pixel;
let remain_len = self.line_remain_buff.len();
if remain_len >= line_num_bytes {
// `Vec::split_to` if std had it
let bytes = {
let bytes_after = self.line_remain_buff.split_off(line_num_bytes);
mem::replace(&mut self.line_remain_buff, bytes_after)
};
return Ok(bytes);
}
let num_bytes = line_num_bytes - remain_len;
let line_data = self.read_encoded_data(num_bytes)?;
let mut pixel_data = Vec::with_capacity(line_num_bytes);
pixel_data.append(&mut self.line_remain_buff);
pixel_data.extend_from_slice(&line_data[..num_bytes]);
// put the remain data to line_remain_buff.
// expects `self.line_remain_buff` to be empty from
// the above `pixel_data.append` call
debug_assert!(self.line_remain_buff.is_empty());
self.line_remain_buff
.extend_from_slice(&line_data[num_bytes..]);
Ok(pixel_data)
}
/// Reverse from BGR encoding to RGB encoding
///
/// TGA files are stored in the BGRA encoding. This function swaps
/// the blue and red bytes in the `pixels` array.
fn reverse_encoding_in_output(&mut self, pixels: &mut [u8]) {
// We only need to reverse the encoding of color images
match self.color_type {
ColorType::Rgb8 | ColorType::Rgba8 => {
for chunk in pixels.chunks_mut(self.color_type.bytes_per_pixel().into()) {
chunk.swap(0, 2);
}
}
_ => {}
}
}
/// Flip the image vertically depending on the screen origin bit
///
/// The bit in position 5 of the image descriptor byte is the screen origin bit.
/// If it's 1, the origin is in the top left corner.
/// If it's 0, the origin is in the bottom left corner.
/// This function checks the bit, and if it's 0, flips the image vertically.
fn flip_vertically(&mut self, pixels: &mut [u8]) {
if self.is_flipped_vertically() {
if self.height == 0 {
return;
}
let num_bytes = pixels.len();
let width_bytes = num_bytes / self.height;
// Flip the image vertically.
for vertical_index in 0..(self.height / 2) {
let vertical_target = (self.height - vertical_index) * width_bytes - width_bytes;
for horizontal_index in 0..width_bytes {
let source = vertical_index * width_bytes + horizontal_index;
let target = vertical_target + horizontal_index;
pixels.swap(target, source);
}
}
}
}
/// Check whether the image is vertically flipped
///
/// The bit in position 5 of the image descriptor byte is the screen origin bit.
/// If it's 1, the origin is in the top left corner.
/// If it's 0, the origin is in the bottom left corner.
/// This function returns `true` when the bit is 0, i.e. when the image is stored bottom-up and must be flipped.
fn is_flipped_vertically(&self) -> bool {
let screen_origin_bit = SCREEN_ORIGIN_BIT_MASK & self.header.image_desc != 0;
!screen_origin_bit
}
fn read_scanline(&mut self, buf: &mut [u8]) -> io::Result<usize> {
if let Some(line_read) = self.line_read {
if line_read == self.height {
return Ok(0);
}
}
// read the pixels from the data region
let mut pixel_data = if self.image_type.is_encoded() {
self.read_encoded_line()?
} else {
let num_raw_bytes = self.width * self.bytes_per_pixel;
let mut buf = vec![0; num_raw_bytes];
self.r.by_ref().read_exact(&mut buf)?;
buf
};
// expand the indices using the color map if necessary
if self.image_type.is_color_mapped() {
pixel_data = self.expand_color_map(&pixel_data)?;
}
self.reverse_encoding_in_output(&mut pixel_data);
// copy to the output buffer
buf[..pixel_data.len()].copy_from_slice(&pixel_data);
self.line_read = Some(self.line_read.unwrap_or(0) + 1);
Ok(pixel_data.len())
}
}
impl<'a, R: 'a + Read + Seek> ImageDecoder<'a> for TgaDecoder<R> {
type Reader = TGAReader<R>;
fn dimensions(&self) -> (u32, u32) {
(self.width as u32, self.height as u32)
}
fn color_type(&self) -> ColorType {
self.color_type
}
fn original_color_type(&self) -> ExtendedColorType {
self.original_color_type
.unwrap_or_else(|| self.color_type().into())
}
fn scanline_bytes(&self) -> u64 {
// This cannot overflow because TGA has a maximum width of u16::MAX and
// `bytes_per_pixel` is a u8.
u64::from(self.color_type.bytes_per_pixel()) * self.width as u64
}
fn into_reader(self) -> ImageResult<Self::Reader> {
Ok(TGAReader {
buffer: ImageReadBuffer::new(self.scanline_bytes(), self.total_bytes()),
decoder: self,
})
}
fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> {
assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes()));
// In indexed images, we might need more bytes than pixels to read them. That's nonsensical
// to encode, but we still don't want to crash on such input.
let mut fallback_buf = vec![];
// read the pixels from the data region
let rawbuf = if self.image_type.is_encoded() {
let pixel_data = self.read_all_encoded_data()?;
if self.bytes_per_pixel <= usize::from(self.color_type.bytes_per_pixel()) {
buf[..pixel_data.len()].copy_from_slice(&pixel_data);
&buf[..pixel_data.len()]
} else {
fallback_buf = pixel_data;
&fallback_buf[..]
}
} else {
let num_raw_bytes = self.width * self.height * self.bytes_per_pixel;
if self.bytes_per_pixel <= usize::from(self.color_type.bytes_per_pixel()) {
self.r.by_ref().read_exact(&mut buf[..num_raw_bytes])?;
&buf[..num_raw_bytes]
} else {
fallback_buf.resize(num_raw_bytes, 0u8);
self.r
.by_ref()
.read_exact(&mut fallback_buf[..num_raw_bytes])?;
&fallback_buf[..num_raw_bytes]
}
};
// expand the indices using the color map if necessary
if self.image_type.is_color_mapped() {
let pixel_data = self.expand_color_map(rawbuf)?;
// not enough data to fill the buffer, or would overflow the buffer
if pixel_data.len() != buf.len() {
return Err(ImageError::Limits(LimitError::from_kind(
LimitErrorKind::DimensionError,
)));
}
buf.copy_from_slice(&pixel_data);
}
self.reverse_encoding_in_output(buf);
self.flip_vertically(buf);
Ok(())
}
}
pub struct TGAReader<R> {
buffer: ImageReadBuffer,
decoder: TgaDecoder<R>,
}
impl<R: Read + Seek> Read for TGAReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
let decoder = &mut self.decoder;
self.buffer.read(buf, |buf| decoder.read_scanline(buf))
}
}
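`read_encoded_data` above implements TGA run-length decoding: a packet byte with the high bit set introduces a run of one pixel repeated, otherwise literal pixels follow, and both counts are stored minus one. A simplified std-only sketch of that loop (no I/O; unlike the decoder it panics on truncated input instead of returning an `io::Error`):

```rust
// Sketch of TGA RLE decoding as in `TgaDecoder::read_encoded_data`.
// `bpp` is bytes per pixel; `num_bytes` is the expected decoded length.
fn decode_tga_rle(mut data: &[u8], bpp: usize, num_bytes: usize) -> Vec<u8> {
    let mut out = Vec::with_capacity(num_bytes);
    while out.len() < num_bytes && !data.is_empty() {
        let packet = data[0];
        data = &data[1..];
        // Both packet kinds store (count - 1) in the low 7 bits.
        let count = (packet & 0x7F) as usize + 1;
        if packet & 0x80 != 0 {
            // Run packet: one pixel, repeated `count` times.
            let (pixel, rest) = data.split_at(bpp);
            for _ in 0..count {
                out.extend_from_slice(pixel);
            }
            data = rest;
        } else {
            // Raw packet: `count` literal pixels follow.
            let (raw, rest) = data.split_at(count * bpp);
            out.extend_from_slice(raw);
            data = rest;
        }
    }
    // The last packet may overshoot the requested length; truncate like
    // the decoder does.
    out.truncate(num_bytes);
    out
}

fn main() {
    // 0x82 => run of 3 pixels [0xAA]; 0x01 => 2 raw pixels.
    let decoded = decode_tga_rle(&[0x82, 0xAA, 0x01, 0x10, 0x20], 1, 5);
    assert_eq!(decoded, [0xAA, 0xAA, 0xAA, 0x10, 0x20]);
}
```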

215
vendor/image/src/codecs/tga/encoder.rs vendored Normal file

@@ -0,0 +1,215 @@
use super::header::Header;
use crate::{error::EncodingError, ColorType, ImageEncoder, ImageError, ImageFormat, ImageResult};
use std::{convert::TryFrom, error, fmt, io::Write};
/// Errors that can occur during encoding and saving of a TGA image.
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)]
enum EncoderError {
/// Invalid TGA width.
WidthInvalid(u32),
/// Invalid TGA height.
HeightInvalid(u32),
}
impl fmt::Display for EncoderError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
EncoderError::WidthInvalid(s) => f.write_fmt(format_args!("Invalid TGA width: {}", s)),
EncoderError::HeightInvalid(s) => {
f.write_fmt(format_args!("Invalid TGA height: {}", s))
}
}
}
}
impl From<EncoderError> for ImageError {
fn from(e: EncoderError) -> ImageError {
ImageError::Encoding(EncodingError::new(ImageFormat::Tga.into(), e))
}
}
impl error::Error for EncoderError {}
/// TGA encoder.
pub struct TgaEncoder<W: Write> {
writer: W,
}
impl<W: Write> TgaEncoder<W> {
/// Create a new encoder that writes its output to ```w```.
pub fn new(w: W) -> TgaEncoder<W> {
TgaEncoder { writer: w }
}
/// Encodes the image ```buf``` that has dimensions ```width```
/// and ```height``` and ```ColorType``` ```color_type```.
///
/// The dimensions of the image must be between 0 and 65535 (inclusive) or
/// an error will be returned.
pub fn encode(
mut self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
// Validate dimensions.
let width = u16::try_from(width)
.map_err(|_| ImageError::from(EncoderError::WidthInvalid(width)))?;
let height = u16::try_from(height)
.map_err(|_| ImageError::from(EncoderError::HeightInvalid(height)))?;
// Write out TGA header.
let header = Header::from_pixel_info(color_type, width, height)?;
header.write_to(&mut self.writer)?;
// Write out Bgr(a)8 or L(a)8 image data.
match color_type {
ColorType::Rgb8 | ColorType::Rgba8 => {
let mut image = Vec::from(buf);
for chunk in image.chunks_mut(usize::from(color_type.bytes_per_pixel())) {
chunk.swap(0, 2);
}
self.writer.write_all(&image)?;
}
_ => {
self.writer.write_all(buf)?;
}
}
Ok(())
}
}
impl<W: Write> ImageEncoder for TgaEncoder<W> {
fn write_image(
self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
self.encode(buf, width, height, color_type)
}
}
#[cfg(test)]
mod tests {
use super::{EncoderError, TgaEncoder};
use crate::{codecs::tga::TgaDecoder, ColorType, ImageDecoder, ImageError};
use std::{error::Error, io::Cursor};
fn round_trip_image(image: &[u8], width: u32, height: u32, c: ColorType) -> Vec<u8> {
let mut encoded_data = Vec::new();
{
let encoder = TgaEncoder::new(&mut encoded_data);
encoder
.encode(&image, width, height, c)
.expect("could not encode image");
}
let decoder = TgaDecoder::new(Cursor::new(&encoded_data)).expect("failed to decode");
let mut buf = vec![0; decoder.total_bytes() as usize];
decoder.read_image(&mut buf).expect("failed to decode");
buf
}
#[test]
fn test_image_width_too_large() {
        // TGA cannot encode images larger than 65,535×65,535.
        // Create a 65,536×1 8-bit black image buffer.
let size = usize::from(u16::MAX) + 1;
let dimension = size as u32;
let img = vec![0u8; size];
// Try to encode an image that is too large
let mut encoded = Vec::new();
let encoder = TgaEncoder::new(&mut encoded);
let result = encoder.encode(&img, dimension, 1, ColorType::L8);
match result {
Err(ImageError::Encoding(err)) => {
let err = err
.source()
.unwrap()
.downcast_ref::<EncoderError>()
.unwrap();
assert_eq!(*err, EncoderError::WidthInvalid(dimension));
}
            other => panic!(
                "Encoding an image that is too wide should return a WidthInvalid \
                 error; it returned {:?} instead",
                other
            ),
}
}
#[test]
fn test_image_height_too_large() {
        // TGA cannot encode images larger than 65,535×65,535.
        // Create a 1×65,536 8-bit black image buffer.
let size = usize::from(u16::MAX) + 1;
let dimension = size as u32;
let img = vec![0u8; size];
// Try to encode an image that is too large
let mut encoded = Vec::new();
let encoder = TgaEncoder::new(&mut encoded);
let result = encoder.encode(&img, 1, dimension, ColorType::L8);
match result {
Err(ImageError::Encoding(err)) => {
let err = err
.source()
.unwrap()
.downcast_ref::<EncoderError>()
.unwrap();
assert_eq!(*err, EncoderError::HeightInvalid(dimension));
}
            other => panic!(
                "Encoding an image that is too tall should return a HeightInvalid \
                 error; it returned {:?} instead",
                other
            ),
}
}
#[test]
fn round_trip_single_pixel_rgb() {
let image = [0, 1, 2];
let decoded = round_trip_image(&image, 1, 1, ColorType::Rgb8);
assert_eq!(decoded.len(), image.len());
assert_eq!(decoded.as_slice(), image);
}
#[test]
fn round_trip_single_pixel_rgba() {
let image = [0, 1, 2, 3];
let decoded = round_trip_image(&image, 1, 1, ColorType::Rgba8);
assert_eq!(decoded.len(), image.len());
assert_eq!(decoded.as_slice(), image);
}
#[test]
fn round_trip_gray() {
let image = [0, 1, 2];
let decoded = round_trip_image(&image, 3, 1, ColorType::L8);
assert_eq!(decoded.len(), image.len());
assert_eq!(decoded.as_slice(), image);
}
#[test]
fn round_trip_graya() {
let image = [0, 1, 2, 3, 4, 5];
let decoded = round_trip_image(&image, 1, 3, ColorType::La8);
assert_eq!(decoded.len(), image.len());
assert_eq!(decoded.as_slice(), image);
}
#[test]
fn round_trip_3px_rgb() {
let image = [0; 3 * 3 * 3]; // 3x3 pixels, 3 bytes per pixel
let _decoded = round_trip_image(&image, 3, 3, ColorType::Rgb8);
}
}
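The encoder above converts RGB(A) input to TGA's BGR(A) byte order simply by swapping the first and third byte of each packed pixel. A minimal stdlib-only sketch of that transform (the `swap_rb` helper name is ours, not part of the crate):

```rust
/// Swap the red and blue channels in place for packed RGB8/RGBA8 data.
/// `bytes_per_pixel` is 3 for RGB8 and 4 for RGBA8.
fn swap_rb(pixels: &mut [u8], bytes_per_pixel: usize) {
    for chunk in pixels.chunks_mut(bytes_per_pixel) {
        chunk.swap(0, 2); // RGB(A) -> BGR(A)
    }
}

fn main() {
    // One red RGBA pixel followed by one blue pixel.
    let mut data = [255, 0, 0, 255, 0, 0, 255, 255];
    swap_rb(&mut data, 4);
    assert_eq!(data, [0, 0, 255, 255, 255, 0, 0, 255]);
    // Applying the swap twice is the identity, so a decoder can reuse it.
    swap_rb(&mut data, 4);
    assert_eq!(data, [255, 0, 0, 255, 0, 0, 255, 255]);
}
```

Because the swap is an involution, the same loop converts in both directions, which is why no separate "decode" variant is needed.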

vendor/image/src/codecs/tga/header.rs vendored Normal file

@@ -0,0 +1,150 @@
use crate::{
error::{UnsupportedError, UnsupportedErrorKind},
ColorType, ImageError, ImageFormat, ImageResult,
};
use byteorder::{LittleEndian, ReadBytesExt, WriteBytesExt};
use std::io::{Read, Write};
pub(crate) const ALPHA_BIT_MASK: u8 = 0b1111;
pub(crate) const SCREEN_ORIGIN_BIT_MASK: u8 = 0b10_0000;
pub(crate) enum ImageType {
NoImageData = 0,
/// Uncompressed images.
RawColorMap = 1,
RawTrueColor = 2,
RawGrayScale = 3,
/// Run length encoded images.
RunColorMap = 9,
RunTrueColor = 10,
RunGrayScale = 11,
Unknown,
}
impl ImageType {
/// Create a new image type from a u8.
pub(crate) fn new(img_type: u8) -> ImageType {
match img_type {
0 => ImageType::NoImageData,
1 => ImageType::RawColorMap,
2 => ImageType::RawTrueColor,
3 => ImageType::RawGrayScale,
9 => ImageType::RunColorMap,
10 => ImageType::RunTrueColor,
11 => ImageType::RunGrayScale,
_ => ImageType::Unknown,
}
}
/// Check if the image format uses colors as opposed to gray scale.
pub(crate) fn is_color(&self) -> bool {
matches! { *self,
ImageType::RawColorMap
| ImageType::RawTrueColor
| ImageType::RunTrueColor
| ImageType::RunColorMap
}
}
/// Does the image use a color map.
pub(crate) fn is_color_mapped(&self) -> bool {
matches! { *self, ImageType::RawColorMap | ImageType::RunColorMap }
}
/// Is the image run length encoded.
pub(crate) fn is_encoded(&self) -> bool {
matches! {*self, ImageType::RunColorMap | ImageType::RunTrueColor | ImageType::RunGrayScale }
}
}
/// Header used by TGA image files.
#[derive(Debug, Default)]
pub(crate) struct Header {
pub(crate) id_length: u8, // length of ID string
pub(crate) map_type: u8, // color map type
pub(crate) image_type: u8, // image type code
pub(crate) map_origin: u16, // starting index of map
pub(crate) map_length: u16, // length of map
pub(crate) map_entry_size: u8, // size of map entries in bits
pub(crate) x_origin: u16, // x-origin of image
pub(crate) y_origin: u16, // y-origin of image
pub(crate) image_width: u16, // width of image
pub(crate) image_height: u16, // height of image
pub(crate) pixel_depth: u8, // bits per pixel
pub(crate) image_desc: u8, // image descriptor
}
impl Header {
/// Load the header with values from pixel information.
pub(crate) fn from_pixel_info(
color_type: ColorType,
width: u16,
height: u16,
) -> ImageResult<Self> {
let mut header = Self::default();
if width > 0 && height > 0 {
let (num_alpha_bits, other_channel_bits, image_type) = match color_type {
ColorType::Rgba8 => (8, 24, ImageType::RawTrueColor),
ColorType::Rgb8 => (0, 24, ImageType::RawTrueColor),
ColorType::La8 => (8, 8, ImageType::RawGrayScale),
ColorType::L8 => (0, 8, ImageType::RawGrayScale),
_ => {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Tga.into(),
UnsupportedErrorKind::Color(color_type.into()),
),
))
}
};
header.image_type = image_type as u8;
header.image_width = width;
header.image_height = height;
header.pixel_depth = num_alpha_bits + other_channel_bits;
header.image_desc = num_alpha_bits & ALPHA_BIT_MASK;
header.image_desc |= SCREEN_ORIGIN_BIT_MASK; // Upper left origin.
}
Ok(header)
}
/// Load the header with values from the reader.
pub(crate) fn from_reader(r: &mut dyn Read) -> ImageResult<Self> {
Ok(Self {
id_length: r.read_u8()?,
map_type: r.read_u8()?,
image_type: r.read_u8()?,
map_origin: r.read_u16::<LittleEndian>()?,
map_length: r.read_u16::<LittleEndian>()?,
map_entry_size: r.read_u8()?,
x_origin: r.read_u16::<LittleEndian>()?,
y_origin: r.read_u16::<LittleEndian>()?,
image_width: r.read_u16::<LittleEndian>()?,
image_height: r.read_u16::<LittleEndian>()?,
pixel_depth: r.read_u8()?,
image_desc: r.read_u8()?,
})
}
/// Write out the header values.
pub(crate) fn write_to(&self, w: &mut dyn Write) -> ImageResult<()> {
w.write_u8(self.id_length)?;
w.write_u8(self.map_type)?;
w.write_u8(self.image_type)?;
w.write_u16::<LittleEndian>(self.map_origin)?;
w.write_u16::<LittleEndian>(self.map_length)?;
w.write_u8(self.map_entry_size)?;
w.write_u16::<LittleEndian>(self.x_origin)?;
w.write_u16::<LittleEndian>(self.y_origin)?;
w.write_u16::<LittleEndian>(self.image_width)?;
w.write_u16::<LittleEndian>(self.image_height)?;
w.write_u8(self.pixel_depth)?;
w.write_u8(self.image_desc)?;
Ok(())
}
}
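`write_to` above emits the classic 18-byte TGA header, with every multi-byte field little-endian. A stdlib-only sketch of the same layout for an uncompressed true-color image (no `byteorder` crate; field order mirrors the `Header` struct, and the helper name is ours):

```rust
use std::io::{self, Write};

/// Write an 18-byte TGA header for a raw true-color (type 2) image.
/// Order: id_length, map_type, image_type, map spec (origin, length,
/// entry size), x/y origin, width, height, pixel depth, descriptor.
fn write_tga_header<W: Write>(w: &mut W, width: u16, height: u16) -> io::Result<()> {
    w.write_all(&[0, 0, 2])?; // no ID string, no color map, RawTrueColor
    w.write_all(&0u16.to_le_bytes())?; // map origin
    w.write_all(&0u16.to_le_bytes())?; // map length
    w.write_all(&[0])?; // map entry size
    w.write_all(&0u16.to_le_bytes())?; // x origin
    w.write_all(&0u16.to_le_bytes())?; // y origin
    w.write_all(&width.to_le_bytes())?;
    w.write_all(&height.to_le_bytes())?;
    w.write_all(&[24])?; // 24 bits per pixel (Rgb8)
    w.write_all(&[0b10_0000])?; // descriptor: upper-left origin, no alpha bits
    Ok(())
}

fn main() {
    let mut buf = Vec::new();
    write_tga_header(&mut buf, 300, 2).unwrap();
    assert_eq!(buf.len(), 18);
    assert_eq!(&buf[12..14], &[44, 1]); // 300 = 0x012C, little-endian
}
```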

vendor/image/src/codecs/tga/mod.rs vendored Normal file

@@ -0,0 +1,17 @@
//! Decoding of TGA Images
//!
//! # Related Links
//! <http://googlesites.inequation.org/tgautilities>
/// A decoder for TGA images
///
/// Currently this decoder does not support 8, 15 and 16 bit color images.
pub use self::decoder::TgaDecoder;
//TODO add 8, 15, 16 bit color support
pub use self::encoder::TgaEncoder;
mod decoder;
mod encoder;
mod header;

vendor/image/src/codecs/tiff.rs vendored Normal file

@@ -0,0 +1,353 @@
//! Decoding and Encoding of TIFF Images
//!
//! TIFF (Tagged Image File Format) is a versatile image format that supports
//! lossless and lossy compression.
//!
//! # Related Links
//! * <http://partners.adobe.com/public/developer/tiff/index.html> - The TIFF specification
extern crate tiff;
use std::convert::TryFrom;
use std::io::{self, Cursor, Read, Seek, Write};
use std::marker::PhantomData;
use std::mem;
use crate::color::{ColorType, ExtendedColorType};
use crate::error::{
DecodingError, EncodingError, ImageError, ImageResult, LimitError, LimitErrorKind,
ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind,
};
use crate::image::{ImageDecoder, ImageEncoder, ImageFormat};
use crate::utils;
/// Decoder for TIFF images.
pub struct TiffDecoder<R>
where
R: Read + Seek,
{
dimensions: (u32, u32),
color_type: ColorType,
// We only use an Option here so we can call with_limits on the decoder without moving.
inner: Option<tiff::decoder::Decoder<R>>,
}
impl<R> TiffDecoder<R>
where
R: Read + Seek,
{
/// Create a new TiffDecoder.
pub fn new(r: R) -> Result<TiffDecoder<R>, ImageError> {
let mut inner = tiff::decoder::Decoder::new(r).map_err(ImageError::from_tiff_decode)?;
let dimensions = inner.dimensions().map_err(ImageError::from_tiff_decode)?;
let color_type = inner.colortype().map_err(ImageError::from_tiff_decode)?;
match inner.find_tag_unsigned_vec::<u16>(tiff::tags::Tag::SampleFormat) {
Ok(Some(sample_formats)) => {
for format in sample_formats {
check_sample_format(format)?;
}
}
Ok(None) => { /* assume UInt format */ }
Err(other) => return Err(ImageError::from_tiff_decode(other)),
};
let color_type = match color_type {
tiff::ColorType::Gray(8) => ColorType::L8,
tiff::ColorType::Gray(16) => ColorType::L16,
tiff::ColorType::GrayA(8) => ColorType::La8,
tiff::ColorType::GrayA(16) => ColorType::La16,
tiff::ColorType::RGB(8) => ColorType::Rgb8,
tiff::ColorType::RGB(16) => ColorType::Rgb16,
tiff::ColorType::RGBA(8) => ColorType::Rgba8,
tiff::ColorType::RGBA(16) => ColorType::Rgba16,
tiff::ColorType::Palette(n) | tiff::ColorType::Gray(n) => {
return Err(err_unknown_color_type(n))
}
tiff::ColorType::GrayA(n) => return Err(err_unknown_color_type(n.saturating_mul(2))),
tiff::ColorType::RGB(n) => return Err(err_unknown_color_type(n.saturating_mul(3))),
tiff::ColorType::YCbCr(n) => return Err(err_unknown_color_type(n.saturating_mul(3))),
tiff::ColorType::RGBA(n) | tiff::ColorType::CMYK(n) => {
return Err(err_unknown_color_type(n.saturating_mul(4)))
}
};
Ok(TiffDecoder {
dimensions,
color_type,
inner: Some(inner),
})
}
}
fn check_sample_format(sample_format: u16) -> Result<(), ImageError> {
match tiff::tags::SampleFormat::from_u16(sample_format) {
Some(tiff::tags::SampleFormat::Uint) => Ok(()),
Some(other) => Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Tiff.into(),
UnsupportedErrorKind::GenericFeature(format!(
"Unhandled TIFF sample format {:?}",
other
)),
),
)),
None => Err(ImageError::Decoding(DecodingError::from_format_hint(
ImageFormat::Tiff.into(),
))),
}
}
fn err_unknown_color_type(value: u8) -> ImageError {
ImageError::Unsupported(UnsupportedError::from_format_and_kind(
ImageFormat::Tiff.into(),
UnsupportedErrorKind::Color(ExtendedColorType::Unknown(value)),
))
}
impl ImageError {
fn from_tiff_decode(err: tiff::TiffError) -> ImageError {
match err {
tiff::TiffError::IoError(err) => ImageError::IoError(err),
err @ tiff::TiffError::FormatError(_)
| err @ tiff::TiffError::IntSizeError
| err @ tiff::TiffError::UsageError(_) => {
ImageError::Decoding(DecodingError::new(ImageFormat::Tiff.into(), err))
}
tiff::TiffError::UnsupportedError(desc) => {
ImageError::Unsupported(UnsupportedError::from_format_and_kind(
ImageFormat::Tiff.into(),
UnsupportedErrorKind::GenericFeature(desc.to_string()),
))
}
tiff::TiffError::LimitsExceeded => {
ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory))
}
}
}
fn from_tiff_encode(err: tiff::TiffError) -> ImageError {
match err {
tiff::TiffError::IoError(err) => ImageError::IoError(err),
err @ tiff::TiffError::FormatError(_)
| err @ tiff::TiffError::IntSizeError
| err @ tiff::TiffError::UsageError(_) => {
ImageError::Encoding(EncodingError::new(ImageFormat::Tiff.into(), err))
}
tiff::TiffError::UnsupportedError(desc) => {
ImageError::Unsupported(UnsupportedError::from_format_and_kind(
ImageFormat::Tiff.into(),
UnsupportedErrorKind::GenericFeature(desc.to_string()),
))
}
tiff::TiffError::LimitsExceeded => {
ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory))
}
}
}
}
/// Wrapper struct around a `Cursor<Vec<u8>>`
pub struct TiffReader<R>(Cursor<Vec<u8>>, PhantomData<R>);
impl<R> Read for TiffReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
self.0.read(buf)
}
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> {
if self.0.position() == 0 && buf.is_empty() {
mem::swap(buf, self.0.get_mut());
Ok(buf.len())
} else {
self.0.read_to_end(buf)
}
}
}
impl<'a, R: 'a + Read + Seek> ImageDecoder<'a> for TiffDecoder<R> {
type Reader = TiffReader<R>;
fn dimensions(&self) -> (u32, u32) {
self.dimensions
}
fn color_type(&self) -> ColorType {
self.color_type
}
fn icc_profile(&mut self) -> Option<Vec<u8>> {
if let Some(decoder) = &mut self.inner {
decoder.get_tag_u8_vec(tiff::tags::Tag::Unknown(34675)).ok()
} else {
None
}
}
fn set_limits(&mut self, limits: crate::io::Limits) -> ImageResult<()> {
limits.check_support(&crate::io::LimitSupport::default())?;
let (width, height) = self.dimensions();
limits.check_dimensions(width, height)?;
let max_alloc = limits.max_alloc.unwrap_or(u64::MAX);
let max_intermediate_alloc = max_alloc.saturating_sub(self.total_bytes());
let mut tiff_limits: tiff::decoder::Limits = Default::default();
tiff_limits.decoding_buffer_size =
usize::try_from(max_alloc - max_intermediate_alloc).unwrap_or(usize::MAX);
tiff_limits.intermediate_buffer_size =
usize::try_from(max_intermediate_alloc).unwrap_or(usize::MAX);
tiff_limits.ifd_value_size = tiff_limits.intermediate_buffer_size;
self.inner = Some(self.inner.take().unwrap().with_limits(tiff_limits));
Ok(())
}
fn into_reader(self) -> ImageResult<Self::Reader> {
let buf = match self
.inner
.unwrap()
.read_image()
.map_err(ImageError::from_tiff_decode)?
{
tiff::decoder::DecodingResult::U8(v) => v,
tiff::decoder::DecodingResult::U16(v) => utils::vec_copy_to_u8(&v),
tiff::decoder::DecodingResult::U32(v) => utils::vec_copy_to_u8(&v),
tiff::decoder::DecodingResult::U64(v) => utils::vec_copy_to_u8(&v),
tiff::decoder::DecodingResult::I8(v) => utils::vec_copy_to_u8(&v),
tiff::decoder::DecodingResult::I16(v) => utils::vec_copy_to_u8(&v),
tiff::decoder::DecodingResult::I32(v) => utils::vec_copy_to_u8(&v),
tiff::decoder::DecodingResult::I64(v) => utils::vec_copy_to_u8(&v),
tiff::decoder::DecodingResult::F32(v) => utils::vec_copy_to_u8(&v),
tiff::decoder::DecodingResult::F64(v) => utils::vec_copy_to_u8(&v),
};
Ok(TiffReader(Cursor::new(buf), PhantomData))
}
fn read_image(self, buf: &mut [u8]) -> ImageResult<()> {
assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes()));
match self
.inner
.unwrap()
.read_image()
.map_err(ImageError::from_tiff_decode)?
{
tiff::decoder::DecodingResult::U8(v) => {
buf.copy_from_slice(&v);
}
tiff::decoder::DecodingResult::U16(v) => {
buf.copy_from_slice(bytemuck::cast_slice(&v));
}
tiff::decoder::DecodingResult::U32(v) => {
buf.copy_from_slice(bytemuck::cast_slice(&v));
}
tiff::decoder::DecodingResult::U64(v) => {
buf.copy_from_slice(bytemuck::cast_slice(&v));
}
tiff::decoder::DecodingResult::I8(v) => {
buf.copy_from_slice(bytemuck::cast_slice(&v));
}
tiff::decoder::DecodingResult::I16(v) => {
buf.copy_from_slice(bytemuck::cast_slice(&v));
}
tiff::decoder::DecodingResult::I32(v) => {
buf.copy_from_slice(bytemuck::cast_slice(&v));
}
tiff::decoder::DecodingResult::I64(v) => {
buf.copy_from_slice(bytemuck::cast_slice(&v));
}
tiff::decoder::DecodingResult::F32(v) => {
buf.copy_from_slice(bytemuck::cast_slice(&v));
}
tiff::decoder::DecodingResult::F64(v) => {
buf.copy_from_slice(bytemuck::cast_slice(&v));
}
}
Ok(())
}
}
/// Encoder for tiff images
pub struct TiffEncoder<W> {
w: W,
}
// Utility to simplify and deduplicate error handling during 16-bit encoding.
fn u8_slice_as_u16(buf: &[u8]) -> ImageResult<&[u16]> {
bytemuck::try_cast_slice(buf).map_err(|err| {
// If the buffer is not aligned or the correct length for a u16 slice, err.
//
// `bytemuck::PodCastError` of bytemuck-1.2.0 does not implement
// `Error` and `Display` trait.
// See <https://github.com/Lokathor/bytemuck/issues/22>.
ImageError::Parameter(ParameterError::from_kind(ParameterErrorKind::Generic(
format!("{:?}", err),
)))
})
}
impl<W: Write + Seek> TiffEncoder<W> {
/// Create a new encoder that writes its output to `w`
pub fn new(w: W) -> TiffEncoder<W> {
TiffEncoder { w }
}
    /// Encodes the image `data` that has dimensions `width` and `height` and `ColorType` `color`.
///
/// 16-bit types assume the buffer is native endian.
pub fn encode(self, data: &[u8], width: u32, height: u32, color: ColorType) -> ImageResult<()> {
let mut encoder =
tiff::encoder::TiffEncoder::new(self.w).map_err(ImageError::from_tiff_encode)?;
match color {
ColorType::L8 => {
encoder.write_image::<tiff::encoder::colortype::Gray8>(width, height, data)
}
ColorType::Rgb8 => {
encoder.write_image::<tiff::encoder::colortype::RGB8>(width, height, data)
}
ColorType::Rgba8 => {
encoder.write_image::<tiff::encoder::colortype::RGBA8>(width, height, data)
}
ColorType::L16 => encoder.write_image::<tiff::encoder::colortype::Gray16>(
width,
height,
u8_slice_as_u16(data)?,
),
ColorType::Rgb16 => encoder.write_image::<tiff::encoder::colortype::RGB16>(
width,
height,
u8_slice_as_u16(data)?,
),
ColorType::Rgba16 => encoder.write_image::<tiff::encoder::colortype::RGBA16>(
width,
height,
u8_slice_as_u16(data)?,
),
_ => {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::Tiff.into(),
UnsupportedErrorKind::Color(color.into()),
),
))
}
}
.map_err(ImageError::from_tiff_encode)?;
Ok(())
}
}
impl<W: Write + Seek> ImageEncoder for TiffEncoder<W> {
fn write_image(
self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
self.encode(buf, width, height, color_type)
}
}
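`u8_slice_as_u16` above leans on `bytemuck` for the checked byte-to-sample reinterpretation. The same validation can be sketched with the standard library alone; this copies rather than casts, so input alignment does not matter (native-endian, as the encode docs note; `u8s_as_u16s` is our illustrative name):

```rust
/// Reinterpret a byte buffer as native-endian u16 samples,
/// rejecting buffers whose length is not a multiple of two.
fn u8s_as_u16s(buf: &[u8]) -> Result<Vec<u16>, String> {
    if buf.len() % 2 != 0 {
        return Err(format!("odd byte length {} cannot hold u16 samples", buf.len()));
    }
    Ok(buf
        .chunks_exact(2)
        .map(|c| u16::from_ne_bytes([c[0], c[1]]))
        .collect())
}

fn main() {
    let samples = [0x1234u16, 0xABCD];
    // Flatten to native-endian bytes, then recover the samples.
    let bytes: Vec<u8> = samples.iter().flat_map(|s| s.to_ne_bytes()).collect();
    assert_eq!(u8s_as_u16s(&bytes).unwrap(), samples);
    assert!(u8s_as_u16s(&bytes[..3]).is_err());
}
```

A zero-copy cast (as `bytemuck::try_cast_slice` performs) additionally requires the pointer to be 2-byte aligned, which is the second failure mode the error message in `u8_slice_as_u16` covers.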

vendor/image/src/codecs/webp/decoder.rs vendored Normal file

@@ -0,0 +1,399 @@
use byteorder::{LittleEndian, ReadBytesExt};
use std::convert::TryFrom;
use std::io::{self, Cursor, Error, Read};
use std::marker::PhantomData;
use std::{error, fmt, mem};
use crate::error::{DecodingError, ImageError, ImageResult, ParameterError, ParameterErrorKind};
use crate::image::{ImageDecoder, ImageFormat};
use crate::{color, AnimationDecoder, Frames, Rgba};
use super::lossless::{LosslessDecoder, LosslessFrame};
use super::vp8::{Frame as VP8Frame, Vp8Decoder};
use super::extended::{read_extended_header, ExtendedImage};
/// All errors that can occur when attempting to parse a WEBP container
#[derive(Debug, Clone, Copy)]
pub(crate) enum DecoderError {
/// RIFF's "RIFF" signature not found or invalid
RiffSignatureInvalid([u8; 4]),
/// WebP's "WEBP" signature not found or invalid
WebpSignatureInvalid([u8; 4]),
/// Chunk Header was incorrect or invalid in its usage
ChunkHeaderInvalid([u8; 4]),
}
impl fmt::Display for DecoderError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
struct SignatureWriter([u8; 4]);
impl fmt::Display for SignatureWriter {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"[{:#04X?}, {:#04X?}, {:#04X?}, {:#04X?}]",
self.0[0], self.0[1], self.0[2], self.0[3]
)
}
}
match self {
DecoderError::RiffSignatureInvalid(riff) => f.write_fmt(format_args!(
"Invalid RIFF signature: {}",
SignatureWriter(*riff)
)),
DecoderError::WebpSignatureInvalid(webp) => f.write_fmt(format_args!(
"Invalid WebP signature: {}",
SignatureWriter(*webp)
)),
DecoderError::ChunkHeaderInvalid(header) => f.write_fmt(format_args!(
"Invalid Chunk header: {}",
SignatureWriter(*header)
)),
}
}
}
impl From<DecoderError> for ImageError {
fn from(e: DecoderError) -> ImageError {
ImageError::Decoding(DecodingError::new(ImageFormat::WebP.into(), e))
}
}
impl error::Error for DecoderError {}
/// All possible RIFF chunks in a WebP image file
#[allow(clippy::upper_case_acronyms)]
#[derive(Debug, Clone, Copy, PartialEq)]
pub(crate) enum WebPRiffChunk {
RIFF,
WEBP,
VP8,
VP8L,
VP8X,
ANIM,
ANMF,
ALPH,
ICCP,
EXIF,
XMP,
}
impl WebPRiffChunk {
pub(crate) fn from_fourcc(chunk_fourcc: [u8; 4]) -> ImageResult<Self> {
match &chunk_fourcc {
b"RIFF" => Ok(Self::RIFF),
b"WEBP" => Ok(Self::WEBP),
b"VP8 " => Ok(Self::VP8),
b"VP8L" => Ok(Self::VP8L),
b"VP8X" => Ok(Self::VP8X),
b"ANIM" => Ok(Self::ANIM),
b"ANMF" => Ok(Self::ANMF),
b"ALPH" => Ok(Self::ALPH),
b"ICCP" => Ok(Self::ICCP),
b"EXIF" => Ok(Self::EXIF),
b"XMP " => Ok(Self::XMP),
_ => Err(DecoderError::ChunkHeaderInvalid(chunk_fourcc).into()),
}
}
pub(crate) fn to_fourcc(&self) -> [u8; 4] {
match self {
Self::RIFF => *b"RIFF",
Self::WEBP => *b"WEBP",
Self::VP8 => *b"VP8 ",
Self::VP8L => *b"VP8L",
Self::VP8X => *b"VP8X",
Self::ANIM => *b"ANIM",
Self::ANMF => *b"ANMF",
Self::ALPH => *b"ALPH",
Self::ICCP => *b"ICCP",
Self::EXIF => *b"EXIF",
Self::XMP => *b"XMP ",
}
}
}
enum WebPImage {
Lossy(VP8Frame),
Lossless(LosslessFrame),
Extended(ExtendedImage),
}
/// WebP Image format decoder. Currently only supports lossy RGB images or lossless RGBA images.
pub struct WebPDecoder<R> {
r: R,
image: WebPImage,
}
impl<R: Read> WebPDecoder<R> {
    /// Create a new WebPDecoder from the reader `r`.
    /// This function takes ownership of the reader.
pub fn new(r: R) -> ImageResult<WebPDecoder<R>> {
let image = WebPImage::Lossy(Default::default());
let mut decoder = WebPDecoder { r, image };
decoder.read_data()?;
Ok(decoder)
}
//reads the 12 bytes of the WebP file header
fn read_riff_header(&mut self) -> ImageResult<u32> {
let mut riff = [0; 4];
self.r.read_exact(&mut riff)?;
if &riff != b"RIFF" {
return Err(DecoderError::RiffSignatureInvalid(riff).into());
}
let size = self.r.read_u32::<LittleEndian>()?;
let mut webp = [0; 4];
self.r.read_exact(&mut webp)?;
if &webp != b"WEBP" {
return Err(DecoderError::WebpSignatureInvalid(webp).into());
}
Ok(size)
}
//reads the chunk header, decodes the frame and returns the inner decoder
fn read_frame(&mut self) -> ImageResult<WebPImage> {
let chunk = read_chunk(&mut self.r)?;
match chunk {
Some((cursor, WebPRiffChunk::VP8)) => {
let mut vp8_decoder = Vp8Decoder::new(cursor);
let frame = vp8_decoder.decode_frame()?;
Ok(WebPImage::Lossy(frame.clone()))
}
Some((cursor, WebPRiffChunk::VP8L)) => {
let mut lossless_decoder = LosslessDecoder::new(cursor);
let frame = lossless_decoder.decode_frame()?;
Ok(WebPImage::Lossless(frame.clone()))
}
Some((mut cursor, WebPRiffChunk::VP8X)) => {
let info = read_extended_header(&mut cursor)?;
let image = ExtendedImage::read_extended_chunks(&mut self.r, info)?;
Ok(WebPImage::Extended(image))
}
None => Err(ImageError::IoError(Error::from(
io::ErrorKind::UnexpectedEof,
))),
Some((_, chunk)) => Err(DecoderError::ChunkHeaderInvalid(chunk.to_fourcc()).into()),
}
}
fn read_data(&mut self) -> ImageResult<()> {
let _size = self.read_riff_header()?;
let image = self.read_frame()?;
self.image = image;
Ok(())
}
/// Returns true if the image as described by the bitstream is animated.
pub fn has_animation(&self) -> bool {
match &self.image {
WebPImage::Lossy(_) => false,
WebPImage::Lossless(_) => false,
WebPImage::Extended(extended) => extended.has_animation(),
}
}
/// Sets the background color if the image is an extended and animated webp.
pub fn set_background_color(&mut self, color: Rgba<u8>) -> ImageResult<()> {
match &mut self.image {
WebPImage::Extended(image) => image.set_background_color(color),
_ => Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(
"Background color can only be set on animated webp".to_owned(),
),
))),
}
}
}
pub(crate) fn read_len_cursor<R>(r: &mut R) -> ImageResult<Cursor<Vec<u8>>>
where
R: Read,
{
let unpadded_len = u64::from(r.read_u32::<LittleEndian>()?);
// RIFF chunks containing an uneven number of bytes append
// an extra 0x00 at the end of the chunk
//
// The addition cannot overflow since we have a u64 that was created from a u32
let len = unpadded_len + (unpadded_len % 2);
let mut framedata = Vec::new();
r.by_ref().take(len).read_to_end(&mut framedata)?;
//remove padding byte
if unpadded_len % 2 == 1 {
framedata.pop();
}
Ok(io::Cursor::new(framedata))
}
/// Reads a chunk header FourCC.
/// Returns `None` if and only if we hit end of file reading the four-character code of the chunk.
/// The inner result is `Err` if and only if the chunk header FourCC is present but unknown.
pub(crate) fn read_fourcc<R: Read>(r: &mut R) -> ImageResult<Option<ImageResult<WebPRiffChunk>>> {
let mut chunk_fourcc = [0; 4];
let result = r.read_exact(&mut chunk_fourcc);
match result {
Ok(()) => {}
Err(err) => {
if err.kind() == io::ErrorKind::UnexpectedEof {
return Ok(None);
} else {
return Err(err.into());
}
}
}
let chunk = WebPRiffChunk::from_fourcc(chunk_fourcc);
Ok(Some(chunk))
}
/// Reads a chunk.
/// Returns an error if the chunk header is not a valid WebP header or on any other read error.
/// Returns `None` if and only if we hit end of file reading the four-character code of the chunk.
pub(crate) fn read_chunk<R>(r: &mut R) -> ImageResult<Option<(Cursor<Vec<u8>>, WebPRiffChunk)>>
where
R: Read,
{
if let Some(chunk) = read_fourcc(r)? {
let chunk = chunk?;
let cursor = read_len_cursor(r)?;
Ok(Some((cursor, chunk)))
} else {
Ok(None)
}
}
/// Wrapper struct around a `Cursor<Vec<u8>>`
pub struct WebpReader<R>(Cursor<Vec<u8>>, PhantomData<R>);
impl<R> Read for WebpReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
self.0.read(buf)
}
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> {
if self.0.position() == 0 && buf.is_empty() {
mem::swap(buf, self.0.get_mut());
Ok(buf.len())
} else {
self.0.read_to_end(buf)
}
}
}
impl<'a, R: 'a + Read> ImageDecoder<'a> for WebPDecoder<R> {
type Reader = WebpReader<R>;
fn dimensions(&self) -> (u32, u32) {
match &self.image {
WebPImage::Lossy(vp8_frame) => {
(u32::from(vp8_frame.width), u32::from(vp8_frame.height))
}
WebPImage::Lossless(lossless_frame) => (
u32::from(lossless_frame.width),
u32::from(lossless_frame.height),
),
WebPImage::Extended(extended) => extended.dimensions(),
}
}
fn color_type(&self) -> color::ColorType {
match &self.image {
WebPImage::Lossy(_) => color::ColorType::Rgb8,
WebPImage::Lossless(_) => color::ColorType::Rgba8,
WebPImage::Extended(extended) => extended.color_type(),
}
}
fn into_reader(self) -> ImageResult<Self::Reader> {
match &self.image {
WebPImage::Lossy(vp8_frame) => {
let mut data = vec![0; vp8_frame.get_buf_size()];
vp8_frame.fill_rgb(data.as_mut_slice());
Ok(WebpReader(Cursor::new(data), PhantomData))
}
WebPImage::Lossless(lossless_frame) => {
let mut data = vec![0; lossless_frame.get_buf_size()];
lossless_frame.fill_rgba(data.as_mut_slice());
Ok(WebpReader(Cursor::new(data), PhantomData))
}
WebPImage::Extended(extended) => {
let mut data = vec![0; extended.get_buf_size()];
extended.fill_buf(data.as_mut_slice());
Ok(WebpReader(Cursor::new(data), PhantomData))
}
}
}
fn read_image(self, buf: &mut [u8]) -> ImageResult<()> {
assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes()));
match &self.image {
WebPImage::Lossy(vp8_frame) => {
vp8_frame.fill_rgb(buf);
}
WebPImage::Lossless(lossless_frame) => {
lossless_frame.fill_rgba(buf);
}
WebPImage::Extended(extended) => {
extended.fill_buf(buf);
}
}
Ok(())
}
fn icc_profile(&mut self) -> Option<Vec<u8>> {
if let WebPImage::Extended(extended) = &self.image {
extended.icc_profile()
} else {
None
}
}
}
impl<'a, R: 'a + Read> AnimationDecoder<'a> for WebPDecoder<R> {
fn into_frames(self) -> Frames<'a> {
match self.image {
WebPImage::Lossy(_) | WebPImage::Lossless(_) => {
Frames::new(Box::new(std::iter::empty()))
}
WebPImage::Extended(extended_image) => extended_image.into_frames(),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn add_with_overflow_size() {
let bytes = vec![
0x52, 0x49, 0x46, 0x46, 0xaf, 0x37, 0x80, 0x47, 0x57, 0x45, 0x42, 0x50, 0x6c, 0x64,
0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xfb, 0x7e, 0x73, 0x00, 0x06, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65,
0x40, 0xfb, 0xff, 0xff, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65,
0x00, 0x00, 0x00, 0x00, 0x62, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x49,
0x49, 0x54, 0x55, 0x50, 0x4c, 0x54, 0x59, 0x50, 0x45, 0x33, 0x37, 0x44, 0x4d, 0x46,
];
let data = std::io::Cursor::new(bytes);
let _ = WebPDecoder::new(data);
}
}
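The RIFF plumbing above (FourCC, little-endian chunk size, pad byte after odd-length payloads) can be sketched with just the standard library; `read_riff_chunk` is our illustrative helper, not the crate's API:

```rust
use std::io::{Cursor, Read};

/// Read one RIFF chunk: 4-byte FourCC, little-endian u32 payload length,
/// the payload, plus one pad byte when the payload length is odd.
fn read_riff_chunk(r: &mut impl Read) -> std::io::Result<([u8; 4], Vec<u8>)> {
    let mut fourcc = [0u8; 4];
    r.read_exact(&mut fourcc)?;
    let mut len_bytes = [0u8; 4];
    r.read_exact(&mut len_bytes)?;
    let len = u32::from_le_bytes(len_bytes) as usize;
    let mut payload = vec![0u8; len];
    r.read_exact(&mut payload)?;
    if len % 2 == 1 {
        let mut pad = [0u8; 1];
        r.read_exact(&mut pad)?; // chunks are word-aligned
    }
    Ok((fourcc, payload))
}

fn main() {
    // A 3-byte payload must be followed by one padding byte.
    let data = [b'V', b'P', b'8', b' ', 3, 0, 0, 0, 1, 2, 3, 0, b'N'];
    let mut cur = Cursor::new(&data[..]);
    let (fourcc, payload) = read_riff_chunk(&mut cur).unwrap();
    assert_eq!(&fourcc, b"VP8 ");
    assert_eq!(payload, [1, 2, 3]);
    // The pad byte was consumed; the next read starts at the 'N'.
    let mut next = [0u8; 1];
    cur.read_exact(&mut next).unwrap();
    assert_eq!(&next, b"N");
}
```

Note that `read_len_cursor` above instead reads the padded length and pops the pad byte afterwards; both approaches leave the reader positioned on the next chunk header.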

vendor/image/src/codecs/webp/encoder.rs vendored Normal file

@@ -0,0 +1,242 @@
//! Encoding of WebP images.
//!
//! Uses the simple encoding API from the [libwebp] library.
//!
//! [libwebp]: https://developers.google.com/speed/webp/docs/api#simple_encoding_api
use std::io::Write;
use libwebp::{Encoder, PixelLayout, WebPMemory};
use crate::error::{
EncodingError, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind,
};
use crate::flat::SampleLayout;
use crate::{ColorType, ImageEncoder, ImageError, ImageFormat, ImageResult};
/// WebP Encoder.
pub struct WebPEncoder<W> {
inner: W,
quality: WebPQuality,
}
/// WebP encoder quality.
#[derive(Debug, Copy, Clone)]
pub struct WebPQuality(Quality);
#[derive(Debug, Copy, Clone)]
enum Quality {
Lossless,
Lossy(u8),
}
impl WebPQuality {
/// Minimum lossy quality value (0).
pub const MIN: u8 = 0;
/// Maximum lossy quality value (100).
pub const MAX: u8 = 100;
/// Default lossy quality (80), providing a balance of quality and file size.
pub const DEFAULT: u8 = 80;
/// Lossless encoding.
pub fn lossless() -> Self {
Self(Quality::Lossless)
}
/// Lossy encoding. 0 = low quality, small size; 100 = high quality, large size.
///
/// Values are clamped from 0 to 100.
pub fn lossy(quality: u8) -> Self {
Self(Quality::Lossy(quality.clamp(Self::MIN, Self::MAX)))
}
}
impl Default for WebPQuality {
fn default() -> Self {
Self::lossy(WebPQuality::DEFAULT)
}
}
impl<W: Write> WebPEncoder<W> {
/// Create a new encoder that writes its output to `w`.
///
/// Defaults to lossy encoding, see [`WebPQuality::DEFAULT`].
pub fn new(w: W) -> Self {
WebPEncoder::new_with_quality(w, WebPQuality::default())
}
/// Create a new encoder with the specified quality, that writes its output to `w`.
pub fn new_with_quality(w: W, quality: WebPQuality) -> Self {
Self { inner: w, quality }
}
    /// Encode image data with the indicated color type.
    ///
    /// The encoder requires the image data to be `Rgb8` or `Rgba8`.
pub fn encode(
mut self,
data: &[u8],
width: u32,
height: u32,
color: ColorType,
) -> ImageResult<()> {
// TODO: convert color types internally?
let layout = match color {
ColorType::Rgb8 => PixelLayout::Rgb,
ColorType::Rgba8 => PixelLayout::Rgba,
_ => {
return Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormat::WebP.into(),
UnsupportedErrorKind::Color(color.into()),
),
))
}
};
// Validate dimensions upfront to avoid panics.
if width == 0
|| height == 0
|| !SampleLayout::row_major_packed(color.channel_count(), width, height)
.fits(data.len())
{
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
// Call the native libwebp library to encode the image.
let encoder = Encoder::new(data, layout, width, height);
let encoded: WebPMemory = match self.quality.0 {
Quality::Lossless => encoder.encode_lossless(),
Quality::Lossy(quality) => encoder.encode(quality as f32),
};
// The simple encoding API in libwebp does not return errors.
if encoded.is_empty() {
return Err(ImageError::Encoding(EncodingError::new(
ImageFormat::WebP.into(),
"encoding failed, output empty",
)));
}
self.inner.write_all(&encoded)?;
Ok(())
}
}
impl<W: Write> ImageEncoder for WebPEncoder<W> {
fn write_image(
self,
buf: &[u8],
width: u32,
height: u32,
color_type: ColorType,
) -> ImageResult<()> {
self.encode(buf, width, height, color_type)
}
}
#[cfg(test)]
mod tests {
use crate::codecs::webp::{WebPEncoder, WebPQuality};
use crate::{ColorType, ImageEncoder};
#[test]
fn webp_lossless_deterministic() {
// 1x1 8-bit image buffer containing a single red pixel.
let rgb: &[u8] = &[255, 0, 0];
let rgba: &[u8] = &[255, 0, 0, 128];
for (color, img, expected) in [
(
ColorType::Rgb8,
rgb,
[
82, 73, 70, 70, 28, 0, 0, 0, 87, 69, 66, 80, 86, 80, 56, 76, 15, 0, 0, 0, 47,
0, 0, 0, 0, 7, 16, 253, 143, 254, 7, 34, 162, 255, 1, 0,
],
),
(
ColorType::Rgba8,
rgba,
[
82, 73, 70, 70, 28, 0, 0, 0, 87, 69, 66, 80, 86, 80, 56, 76, 15, 0, 0, 0, 47,
0, 0, 0, 16, 7, 16, 253, 143, 2, 6, 34, 162, 255, 1, 0,
],
),
] {
// Encode it into a memory buffer.
let mut encoded_img = Vec::new();
{
let encoder =
WebPEncoder::new_with_quality(&mut encoded_img, WebPQuality::lossless());
encoder
.write_image(&img, 1, 1, color)
.expect("image encoding failed");
}
// WebP encoding should be deterministic.
assert_eq!(encoded_img, expected);
}
}
#[derive(Debug, Clone)]
struct MockImage {
width: u32,
height: u32,
color: ColorType,
data: Vec<u8>,
}
impl quickcheck::Arbitrary for MockImage {
fn arbitrary(g: &mut quickcheck::Gen) -> Self {
// Limit to small, non-empty images <= 512x512.
let width = u32::arbitrary(g) % 512 + 1;
let height = u32::arbitrary(g) % 512 + 1;
let (color, stride) = if bool::arbitrary(g) {
(ColorType::Rgb8, 3)
} else {
(ColorType::Rgba8, 4)
};
let size = width * height * stride;
let data: Vec<u8> = (0..size).map(|_| u8::arbitrary(g)).collect();
MockImage {
width,
height,
color,
data,
}
}
}
quickcheck! {
fn fuzz_webp_valid_image(image: MockImage, quality: u8) -> bool {
// Check valid images do not panic.
let mut buffer = Vec::<u8>::new();
for webp_quality in [WebPQuality::lossless(), WebPQuality::lossy(quality)] {
buffer.clear();
let encoder = WebPEncoder::new_with_quality(&mut buffer, webp_quality);
if encoder
.write_image(&image.data, image.width, image.height, image.color)
.is_err() {
return false;
}
}
true
}
fn fuzz_webp_no_panic(data: Vec<u8>, width: u8, height: u8, quality: u8) -> bool {
// Check random (usually invalid) parameters do not panic.
let mut buffer = Vec::<u8>::new();
for color in [ColorType::Rgb8, ColorType::Rgba8] {
for webp_quality in [WebPQuality::lossless(), WebPQuality::lossy(quality)] {
buffer.clear();
let encoder = WebPEncoder::new_with_quality(&mut buffer, webp_quality);
// Ignore errors.
let _ = encoder
.write_image(&data, width as u32, height as u32, color);
}
}
true
}
}
}

vendor/image/src/codecs/webp/extended.rs vendored Normal file
@@ -0,0 +1,839 @@
use std::convert::TryInto;
use std::io::{self, Cursor, Error, Read};
use std::{error, fmt};
use super::decoder::{
read_chunk, read_fourcc, read_len_cursor, DecoderError::ChunkHeaderInvalid, WebPRiffChunk,
};
use super::lossless::{LosslessDecoder, LosslessFrame};
use super::vp8::{Frame as VP8Frame, Vp8Decoder};
use crate::error::{DecodingError, ParameterError, ParameterErrorKind};
use crate::image::ImageFormat;
use crate::{
ColorType, Delay, Frame, Frames, ImageError, ImageResult, Rgb, RgbImage, Rgba, RgbaImage,
};
use byteorder::{LittleEndian, ReadBytesExt};
// All errors that can occur while parsing extended chunks in a WebP file
#[derive(Debug, Clone, Copy)]
enum DecoderError {
// Some bits were invalid
InfoBitsInvalid { name: &'static str, value: u32 },
// Alpha chunk doesn't match the frame's size
AlphaChunkSizeMismatch,
// Image is too large, either for the platform's pointer size or generally
ImageTooLarge,
// Frame would go out of the canvas
FrameOutsideImage,
}
impl fmt::Display for DecoderError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
DecoderError::InfoBitsInvalid { name, value } => f.write_fmt(format_args!(
"Info bits `{}` invalid, received value: {}",
name, value
)),
DecoderError::AlphaChunkSizeMismatch => {
f.write_str("Alpha chunk doesn't match the size of the frame")
}
DecoderError::ImageTooLarge => f.write_str("Image is too large to be decoded"),
DecoderError::FrameOutsideImage => {
f.write_str("Frame is too large and would go outside the image")
}
}
}
}
impl From<DecoderError> for ImageError {
fn from(e: DecoderError) -> ImageError {
ImageError::Decoding(DecodingError::new(ImageFormat::WebP.into(), e))
}
}
impl error::Error for DecoderError {}
#[derive(Debug, Clone)]
pub(crate) struct WebPExtendedInfo {
_icc_profile: bool,
_alpha: bool,
_exif_metadata: bool,
_xmp_metadata: bool,
_animation: bool,
canvas_width: u32,
canvas_height: u32,
icc_profile: Option<Vec<u8>>,
}
#[derive(Debug)]
enum ExtendedImageData {
Animation {
frames: Vec<AnimatedFrame>,
anim_info: WebPAnimatedInfo,
},
Static(WebPStatic),
}
#[derive(Debug)]
pub(crate) struct ExtendedImage {
info: WebPExtendedInfo,
image: ExtendedImageData,
}
impl ExtendedImage {
pub(crate) fn dimensions(&self) -> (u32, u32) {
(self.info.canvas_width, self.info.canvas_height)
}
pub(crate) fn has_animation(&self) -> bool {
self.info._animation
}
pub(crate) fn icc_profile(&self) -> Option<Vec<u8>> {
self.info.icc_profile.clone()
}
pub(crate) fn color_type(&self) -> ColorType {
match &self.image {
ExtendedImageData::Animation { frames, .. } => &frames[0].image,
ExtendedImageData::Static(image) => image,
}
.color_type()
}
pub(crate) fn into_frames<'a>(self) -> Frames<'a> {
struct FrameIterator {
image: ExtendedImage,
index: usize,
canvas: RgbaImage,
}
impl Iterator for FrameIterator {
type Item = ImageResult<Frame>;
fn next(&mut self) -> Option<Self::Item> {
if let ExtendedImageData::Animation { frames, anim_info } = &self.image.image {
let frame = frames.get(self.index);
match frame {
Some(anim_image) => {
self.index += 1;
ExtendedImage::draw_subimage(
&mut self.canvas,
anim_image,
anim_info.background_color,
)
}
None => None,
}
} else {
None
}
}
}
let width = self.info.canvas_width;
let height = self.info.canvas_height;
let background_color =
if let ExtendedImageData::Animation { ref anim_info, .. } = self.image {
anim_info.background_color
} else {
Rgba([0, 0, 0, 0])
};
let frame_iter = FrameIterator {
image: self,
index: 0,
canvas: RgbaImage::from_pixel(width, height, background_color),
};
Frames::new(Box::new(frame_iter))
}
pub(crate) fn read_extended_chunks<R: Read>(
reader: &mut R,
mut info: WebPExtendedInfo,
) -> ImageResult<ExtendedImage> {
let mut anim_info: Option<WebPAnimatedInfo> = None;
let mut anim_frames: Vec<AnimatedFrame> = Vec::new();
let mut static_frame: Option<WebPStatic> = None;
//go until end of file and while chunk headers are valid
while let Some((mut cursor, chunk)) = read_extended_chunk(reader)? {
match chunk {
WebPRiffChunk::EXIF | WebPRiffChunk::XMP => {
//ignore these chunks
}
WebPRiffChunk::ANIM => {
if anim_info.is_none() {
anim_info = Some(Self::read_anim_info(&mut cursor)?);
}
}
WebPRiffChunk::ANMF => {
let frame = read_anim_frame(cursor, info.canvas_width, info.canvas_height)?;
anim_frames.push(frame);
}
WebPRiffChunk::ALPH => {
if static_frame.is_none() {
let alpha_chunk =
read_alpha_chunk(&mut cursor, info.canvas_width, info.canvas_height)?;
let vp8_frame = read_lossy_with_chunk(reader)?;
let img = WebPStatic::from_alpha_lossy(alpha_chunk, vp8_frame)?;
static_frame = Some(img);
}
}
WebPRiffChunk::ICCP => {
let mut icc_profile = Vec::new();
cursor.read_to_end(&mut icc_profile)?;
info.icc_profile = Some(icc_profile);
}
WebPRiffChunk::VP8 => {
if static_frame.is_none() {
let vp8_frame = read_lossy(cursor)?;
let img = WebPStatic::from_lossy(vp8_frame)?;
static_frame = Some(img);
}
}
WebPRiffChunk::VP8L => {
if static_frame.is_none() {
let mut lossless_decoder = LosslessDecoder::new(cursor);
let frame = lossless_decoder.decode_frame()?;
let image = WebPStatic::Lossless(frame.clone());
static_frame = Some(image);
}
}
_ => return Err(ChunkHeaderInvalid(chunk.to_fourcc()).into()),
}
}
let image = if let Some(info) = anim_info {
if anim_frames.is_empty() {
return Err(ImageError::IoError(Error::from(
io::ErrorKind::UnexpectedEof,
)));
}
ExtendedImageData::Animation {
frames: anim_frames,
anim_info: info,
}
} else if let Some(frame) = static_frame {
ExtendedImageData::Static(frame)
} else {
//reached end of file before any image data was found
return Err(ImageError::IoError(Error::from(
io::ErrorKind::UnexpectedEof,
)));
};
let image = ExtendedImage { image, info };
Ok(image)
}
fn read_anim_info<R: Read>(reader: &mut R) -> ImageResult<WebPAnimatedInfo> {
let mut colors: [u8; 4] = [0; 4];
reader.read_exact(&mut colors)?;
//background color is [blue, green, red, alpha]
let background_color = Rgba([colors[2], colors[1], colors[0], colors[3]]);
let loop_count = reader.read_u16::<LittleEndian>()?;
let info = WebPAnimatedInfo {
background_color,
_loop_count: loop_count,
};
Ok(info)
}
fn draw_subimage(
canvas: &mut RgbaImage,
anim_image: &AnimatedFrame,
background_color: Rgba<u8>,
) -> Option<ImageResult<Frame>> {
let mut buffer = vec![0; anim_image.image.get_buf_size()];
anim_image.image.fill_buf(&mut buffer);
let has_alpha = anim_image.image.has_alpha();
let pixel_len: u32 = anim_image.image.color_type().bytes_per_pixel().into();
'x: for x in 0..anim_image.width {
for y in 0..anim_image.height {
let canvas_index: (u32, u32) = (x + anim_image.offset_x, y + anim_image.offset_y);
// Negative offsets are not possible due to unsigned ints
// If we go out of bounds by height, still continue by x
if canvas_index.1 >= canvas.height() {
continue 'x;
}
// If we go out of bounds by width, it doesn't make sense to continue at all
if canvas_index.0 >= canvas.width() {
break 'x;
}
let index: usize = ((y * anim_image.width + x) * pixel_len).try_into().unwrap();
canvas[canvas_index] = if anim_image.use_alpha_blending && has_alpha {
let buffer: [u8; 4] = buffer[index..][..4].try_into().unwrap();
ExtendedImage::do_alpha_blending(buffer, canvas[canvas_index])
} else {
Rgba([
buffer[index],
buffer[index + 1],
buffer[index + 2],
if has_alpha { buffer[index + 3] } else { 255 },
])
};
}
}
let delay = Delay::from_numer_denom_ms(anim_image.duration, 1);
let img = canvas.clone();
let frame = Frame::from_parts(img, 0, 0, delay);
if anim_image.dispose {
for x in 0..anim_image.width {
for y in 0..anim_image.height {
let canvas_index = (x + anim_image.offset_x, y + anim_image.offset_y);
canvas[canvas_index] = background_color;
}
}
}
Some(Ok(frame))
}
fn do_alpha_blending(buffer: [u8; 4], canvas: Rgba<u8>) -> Rgba<u8> {
let canvas_alpha = f64::from(canvas[3]);
let buffer_alpha = f64::from(buffer[3]);
let blend_alpha_f64 = buffer_alpha + canvas_alpha * (1.0 - buffer_alpha / 255.0);
//value should be between 0 and 255, this truncates the fractional part
let blend_alpha: u8 = blend_alpha_f64 as u8;
let blend_rgb: [u8; 3] = if blend_alpha == 0 {
[0, 0, 0]
} else {
let mut rgb = [0u8; 3];
for i in 0..3 {
let canvas_f64 = f64::from(canvas[i]);
let buffer_f64 = f64::from(buffer[i]);
let val = (buffer_f64 * buffer_alpha
+ canvas_f64 * canvas_alpha * (1.0 - buffer_alpha / 255.0))
/ blend_alpha_f64;
//value should be between 0 and 255, this truncates the fractional part
rgb[i] = val as u8;
}
rgb
};
Rgba([blend_rgb[0], blend_rgb[1], blend_rgb[2], blend_alpha])
}
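`do_alpha_blending` above is the standard "over" compositing operator in straight (non-premultiplied) alpha. A standalone sketch mirroring its arithmetic (`blend_over` is a hypothetical helper, not part of the crate):

```rust
// Composite a source RGBA pixel over a destination RGBA pixel, alpha in 0..=255.
// Mirrors the f64 arithmetic and truncation used by the decoder's blend.
fn blend_over(src: [u8; 4], dst: [u8; 4]) -> [u8; 4] {
    let sa = f64::from(src[3]);
    let da = f64::from(dst[3]);
    let out_a = sa + da * (1.0 - sa / 255.0);
    let mut out = [0u8; 4];
    if out_a > 0.0 {
        for i in 0..3 {
            let c = (f64::from(src[i]) * sa
                + f64::from(dst[i]) * da * (1.0 - sa / 255.0))
                / out_a;
            out[i] = c as u8; // truncates the fractional part, as in the decoder
        }
    }
    out[3] = out_a as u8;
    out
}

fn main() {
    // A fully opaque source replaces the destination.
    assert_eq!(blend_over([255, 0, 0, 255], [0, 0, 255, 255]), [255, 0, 0, 255]);
    // A fully transparent source leaves an opaque destination unchanged.
    assert_eq!(blend_over([10, 20, 30, 0], [0, 0, 255, 255]), [0, 0, 255, 255]);
    println!("ok");
}
```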
pub(crate) fn fill_buf(&self, buf: &mut [u8]) {
match &self.image {
// will always have at least one frame
ExtendedImageData::Animation { frames, anim_info } => {
let first_frame = &frames[0];
let (canvas_width, canvas_height) = self.dimensions();
if canvas_width == first_frame.width && canvas_height == first_frame.height {
first_frame.image.fill_buf(buf);
} else {
let bg_color = match &self.info._alpha {
true => Rgba::from([0, 0, 0, 0]),
false => anim_info.background_color,
};
let mut canvas = RgbaImage::from_pixel(canvas_width, canvas_height, bg_color);
let _ = ExtendedImage::draw_subimage(&mut canvas, first_frame, bg_color)
.unwrap()
.unwrap();
buf.copy_from_slice(canvas.into_raw().as_slice());
}
}
ExtendedImageData::Static(image) => {
image.fill_buf(buf);
}
}
}
pub(crate) fn get_buf_size(&self) -> usize {
match &self.image {
// will always have at least one frame
ExtendedImageData::Animation { frames, .. } => &frames[0].image,
ExtendedImageData::Static(image) => image,
}
.get_buf_size()
}
pub(crate) fn set_background_color(&mut self, color: Rgba<u8>) -> ImageResult<()> {
match &mut self.image {
ExtendedImageData::Animation { anim_info, .. } => {
anim_info.background_color = color;
Ok(())
}
_ => Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::Generic(
"Background color can only be set on animated webp".to_owned(),
),
))),
}
}
}
#[derive(Debug)]
enum WebPStatic {
LossyWithAlpha(RgbaImage),
LossyWithoutAlpha(RgbImage),
Lossless(LosslessFrame),
}
impl WebPStatic {
pub(crate) fn from_alpha_lossy(
alpha: AlphaChunk,
vp8_frame: VP8Frame,
) -> ImageResult<WebPStatic> {
if alpha.data.len() != usize::from(vp8_frame.width) * usize::from(vp8_frame.height) {
return Err(DecoderError::AlphaChunkSizeMismatch.into());
}
let size = usize::from(vp8_frame.width).checked_mul(usize::from(vp8_frame.height) * 4);
let mut image_vec = match size {
Some(size) => vec![0u8; size],
None => return Err(DecoderError::ImageTooLarge.into()),
};
vp8_frame.fill_rgba(&mut image_vec);
for y in 0..vp8_frame.height {
for x in 0..vp8_frame.width {
let predictor: u8 = WebPStatic::get_predictor(
x.into(),
y.into(),
vp8_frame.width.into(),
alpha.filtering_method,
&image_vec,
);
let predictor = u16::from(predictor);
let alpha_index = usize::from(y) * usize::from(vp8_frame.width) + usize::from(x);
let alpha_val = alpha.data[alpha_index];
let alpha: u8 = ((predictor + u16::from(alpha_val)) % 256)
.try_into()
.unwrap();
let alpha_index = alpha_index * 4 + 3;
image_vec[alpha_index] = alpha;
}
}
let image = RgbaImage::from_vec(vp8_frame.width.into(), vp8_frame.height.into(), image_vec)
.unwrap();
Ok(WebPStatic::LossyWithAlpha(image))
}
fn get_predictor(
x: usize,
y: usize,
width: usize,
filtering_method: FilteringMethod,
image_slice: &[u8],
) -> u8 {
match filtering_method {
FilteringMethod::None => 0,
FilteringMethod::Horizontal => {
if x == 0 && y == 0 {
0
} else if x == 0 {
let index = (y - 1) * width + x;
image_slice[index * 4 + 3]
} else {
let index = y * width + x - 1;
image_slice[index * 4 + 3]
}
}
FilteringMethod::Vertical => {
if x == 0 && y == 0 {
0
} else if y == 0 {
let index = y * width + x - 1;
image_slice[index * 4 + 3]
} else {
let index = (y - 1) * width + x;
image_slice[index * 4 + 3]
}
}
FilteringMethod::Gradient => {
let (left, top, top_left) = match (x, y) {
(0, 0) => (0, 0, 0),
(0, y) => {
let above_index = (y - 1) * width + x;
let val = image_slice[above_index * 4 + 3];
(val, val, val)
}
(x, 0) => {
let before_index = y * width + x - 1;
let val = image_slice[before_index * 4 + 3];
(val, val, val)
}
(x, y) => {
let left_index = y * width + x - 1;
let left = image_slice[left_index * 4 + 3];
let top_index = (y - 1) * width + x;
let top = image_slice[top_index * 4 + 3];
let top_left_index = (y - 1) * width + x - 1;
let top_left = image_slice[top_left_index * 4 + 3];
(left, top, top_left)
}
};
let combination = i16::from(left) + i16::from(top) - i16::from(top_left);
i16::clamp(combination, 0, 255).try_into().unwrap()
}
}
}
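The `Gradient` arm above predicts each alpha value as `left + top - top_left`, clamped to a byte. A minimal sketch of just that predictor (hypothetical standalone helper):

```rust
// Gradient predictor used for WebP alpha filtering: predict a value from its
// left, top, and top-left neighbors, clamped to the valid byte range.
fn gradient_predictor(left: u8, top: u8, top_left: u8) -> u8 {
    let g = i16::from(left) + i16::from(top) - i16::from(top_left);
    g.clamp(0, 255) as u8
}

fn main() {
    assert_eq!(gradient_predictor(100, 120, 110), 110); // 100 + 120 - 110
    assert_eq!(gradient_predictor(0, 0, 200), 0);       // clamped low
    assert_eq!(gradient_predictor(250, 250, 0), 255);   // clamped high
    println!("ok");
}
```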
pub(crate) fn from_lossy(vp8_frame: VP8Frame) -> ImageResult<WebPStatic> {
let mut image = RgbImage::from_pixel(
vp8_frame.width.into(),
vp8_frame.height.into(),
Rgb([0, 0, 0]),
);
vp8_frame.fill_rgb(&mut image);
Ok(WebPStatic::LossyWithoutAlpha(image))
}
pub(crate) fn fill_buf(&self, buf: &mut [u8]) {
match self {
WebPStatic::LossyWithAlpha(image) => {
buf.copy_from_slice(image);
}
WebPStatic::LossyWithoutAlpha(image) => {
buf.copy_from_slice(image);
}
WebPStatic::Lossless(lossless) => {
lossless.fill_rgba(buf);
}
}
}
pub(crate) fn get_buf_size(&self) -> usize {
match self {
WebPStatic::LossyWithAlpha(rgba_image) => rgba_image.len(),
WebPStatic::LossyWithoutAlpha(rgb_image) => rgb_image.len(),
WebPStatic::Lossless(lossless) => lossless.get_buf_size(),
}
}
pub(crate) fn color_type(&self) -> ColorType {
if self.has_alpha() {
ColorType::Rgba8
} else {
ColorType::Rgb8
}
}
pub(crate) fn has_alpha(&self) -> bool {
match self {
Self::LossyWithAlpha(..) | Self::Lossless(..) => true,
Self::LossyWithoutAlpha(..) => false,
}
}
}
#[derive(Debug)]
struct WebPAnimatedInfo {
background_color: Rgba<u8>,
_loop_count: u16,
}
#[derive(Debug)]
struct AnimatedFrame {
offset_x: u32,
offset_y: u32,
width: u32,
height: u32,
duration: u32,
use_alpha_blending: bool,
dispose: bool,
image: WebPStatic,
}
/// Reads a chunk, but silently ignores unknown chunks at the end of a file
fn read_extended_chunk<R>(r: &mut R) -> ImageResult<Option<(Cursor<Vec<u8>>, WebPRiffChunk)>>
where
R: Read,
{
let mut unknown_chunk = Ok(());
while let Some(chunk) = read_fourcc(r)? {
let cursor = read_len_cursor(r)?;
match chunk {
Ok(chunk) => return unknown_chunk.and(Ok(Some((cursor, chunk)))),
Err(err) => unknown_chunk = unknown_chunk.and(Err(err)),
}
}
Ok(None)
}
pub(crate) fn read_extended_header<R: Read>(reader: &mut R) -> ImageResult<WebPExtendedInfo> {
let chunk_flags = reader.read_u8()?;
let reserved_first = chunk_flags & 0b11000000;
let icc_profile = chunk_flags & 0b00100000 != 0;
let alpha = chunk_flags & 0b00010000 != 0;
let exif_metadata = chunk_flags & 0b00001000 != 0;
let xmp_metadata = chunk_flags & 0b00000100 != 0;
let animation = chunk_flags & 0b00000010 != 0;
let reserved_second = chunk_flags & 0b00000001;
let reserved_third = read_3_bytes(reader)?;
if reserved_first != 0 || reserved_second != 0 || reserved_third != 0 {
let value: u32 = if reserved_first != 0 {
reserved_first.into()
} else if reserved_second != 0 {
reserved_second.into()
} else {
reserved_third
};
return Err(DecoderError::InfoBitsInvalid {
name: "reserved",
value,
}
.into());
}
let canvas_width = read_3_bytes(reader)? + 1;
let canvas_height = read_3_bytes(reader)? + 1;
//product of canvas dimensions cannot be larger than u32 max
if u32::checked_mul(canvas_width, canvas_height).is_none() {
return Err(DecoderError::ImageTooLarge.into());
}
let info = WebPExtendedInfo {
_icc_profile: icc_profile,
_alpha: alpha,
_exif_metadata: exif_metadata,
_xmp_metadata: xmp_metadata,
_animation: animation,
canvas_width,
canvas_height,
icc_profile: None,
};
Ok(info)
}
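The flag byte parsed above packs the VP8X features one bit each: from MSB to LSB, two reserved bits, ICC, alpha, EXIF, XMP, animation, and one more reserved bit. A sketch of just the feature-bit extraction (`parse_vp8x_flags` is a hypothetical helper):

```rust
// Extract the five VP8X feature flags from the chunk's flag byte.
fn parse_vp8x_flags(b: u8) -> (bool, bool, bool, bool, bool) {
    (
        b & 0b0010_0000 != 0, // ICC profile
        b & 0b0001_0000 != 0, // alpha
        b & 0b0000_1000 != 0, // EXIF metadata
        b & 0b0000_0100 != 0, // XMP metadata
        b & 0b0000_0010 != 0, // animation
    )
}

fn main() {
    // All feature bits set, reserved bits clear.
    assert_eq!(parse_vp8x_flags(0b0011_1110), (true, true, true, true, true));
    // Only the animation bit set.
    assert_eq!(parse_vp8x_flags(0b0000_0010), (false, false, false, false, true));
    println!("ok");
}
```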
fn read_anim_frame<R: Read>(
mut reader: R,
canvas_width: u32,
canvas_height: u32,
) -> ImageResult<AnimatedFrame> {
//frame offsets are stored halved, so multiply by 2
let frame_x = read_3_bytes(&mut reader)? * 2;
let frame_y = read_3_bytes(&mut reader)? * 2;
let frame_width = read_3_bytes(&mut reader)? + 1;
let frame_height = read_3_bytes(&mut reader)? + 1;
if frame_x + frame_width > canvas_width || frame_y + frame_height > canvas_height {
return Err(DecoderError::FrameOutsideImage.into());
}
let duration = read_3_bytes(&mut reader)?;
let frame_info = reader.read_u8()?;
let reserved = frame_info & 0b11111100;
if reserved != 0 {
return Err(DecoderError::InfoBitsInvalid {
name: "reserved",
value: reserved.into(),
}
.into());
}
let use_alpha_blending = frame_info & 0b00000010 == 0;
let dispose = frame_info & 0b00000001 != 0;
//read normal bitstream now
let static_image = read_image(&mut reader, frame_width, frame_height)?;
let frame = AnimatedFrame {
offset_x: frame_x,
offset_y: frame_y,
width: frame_width,
height: frame_height,
duration,
use_alpha_blending,
dispose,
image: static_image,
};
Ok(frame)
}
fn read_3_bytes<R: Read>(reader: &mut R) -> ImageResult<u32> {
let mut buffer: [u8; 3] = [0; 3];
reader.read_exact(&mut buffer)?;
let value: u32 =
(u32::from(buffer[2]) << 16) | (u32::from(buffer[1]) << 8) | u32::from(buffer[0]);
Ok(value)
}
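`read_3_bytes` decodes a 24-bit little-endian integer: byte 0 is the least significant. The same computation on an in-memory buffer (`u24_le` is a hypothetical helper):

```rust
// Decode a 24-bit little-endian value from three bytes.
fn u24_le(b: [u8; 3]) -> u32 {
    u32::from(b[0]) | (u32::from(b[1]) << 8) | (u32::from(b[2]) << 16)
}

fn main() {
    assert_eq!(u24_le([0x03, 0x02, 0x01]), 0x010203);
    assert_eq!(u24_le([0xFF, 0xFF, 0xFF]), 0xFF_FFFF);
    println!("ok");
}
```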
fn read_lossy_with_chunk<R: Read>(reader: &mut R) -> ImageResult<VP8Frame> {
let (cursor, chunk) =
read_chunk(reader)?.ok_or_else(|| Error::from(io::ErrorKind::UnexpectedEof))?;
if chunk != WebPRiffChunk::VP8 {
return Err(ChunkHeaderInvalid(chunk.to_fourcc()).into());
}
read_lossy(cursor)
}
fn read_lossy(cursor: Cursor<Vec<u8>>) -> ImageResult<VP8Frame> {
let mut vp8_decoder = Vp8Decoder::new(cursor);
let frame = vp8_decoder.decode_frame()?;
Ok(frame.clone())
}
fn read_image<R: Read>(reader: &mut R, width: u32, height: u32) -> ImageResult<WebPStatic> {
let chunk = read_chunk(reader)?;
match chunk {
Some((cursor, WebPRiffChunk::VP8)) => {
let mut vp8_decoder = Vp8Decoder::new(cursor);
let frame = vp8_decoder.decode_frame()?;
let img = WebPStatic::from_lossy(frame.clone())?;
Ok(img)
}
Some((cursor, WebPRiffChunk::VP8L)) => {
let mut lossless_decoder = LosslessDecoder::new(cursor);
let frame = lossless_decoder.decode_frame()?;
let img = WebPStatic::Lossless(frame.clone());
Ok(img)
}
Some((mut cursor, WebPRiffChunk::ALPH)) => {
let alpha_chunk = read_alpha_chunk(&mut cursor, width, height)?;
let vp8_frame = read_lossy_with_chunk(reader)?;
let img = WebPStatic::from_alpha_lossy(alpha_chunk, vp8_frame)?;
Ok(img)
}
None => Err(ImageError::IoError(Error::from(
io::ErrorKind::UnexpectedEof,
))),
Some((_, chunk)) => Err(ChunkHeaderInvalid(chunk.to_fourcc()).into()),
}
}
#[derive(Debug)]
struct AlphaChunk {
_preprocessing: bool,
filtering_method: FilteringMethod,
data: Vec<u8>,
}
#[derive(Debug, Copy, Clone)]
enum FilteringMethod {
None,
Horizontal,
Vertical,
Gradient,
}
fn read_alpha_chunk<R: Read>(reader: &mut R, width: u32, height: u32) -> ImageResult<AlphaChunk> {
let info_byte = reader.read_u8()?;
let reserved = info_byte & 0b11000000;
let preprocessing = (info_byte & 0b00110000) >> 4;
let filtering = (info_byte & 0b00001100) >> 2;
let compression = info_byte & 0b00000011;
if reserved != 0 {
return Err(DecoderError::InfoBitsInvalid {
name: "reserved",
value: reserved.into(),
}
.into());
}
let preprocessing = match preprocessing {
0 => false,
1 => true,
_ => {
return Err(DecoderError::InfoBitsInvalid {
name: "reserved",
value: preprocessing.into(),
}
.into())
}
};
let filtering_method = match filtering {
0 => FilteringMethod::None,
1 => FilteringMethod::Horizontal,
2 => FilteringMethod::Vertical,
3 => FilteringMethod::Gradient,
_ => unreachable!(),
};
let lossless_compression = match compression {
0 => false,
1 => true,
_ => {
return Err(DecoderError::InfoBitsInvalid {
name: "lossless compression",
value: compression.into(),
}
.into())
}
};
let mut framedata = Vec::new();
reader.read_to_end(&mut framedata)?;
let data = if lossless_compression {
let cursor = io::Cursor::new(framedata);
let mut decoder = LosslessDecoder::new(cursor);
//this is a potential problem for large images; would require rewriting lossless decoder to use u32 for width and height
let width: u16 = width
.try_into()
.map_err(|_| ImageError::from(DecoderError::ImageTooLarge))?;
let height: u16 = height
.try_into()
.map_err(|_| ImageError::from(DecoderError::ImageTooLarge))?;
let frame = decoder.decode_frame_implicit_dims(width, height)?;
let mut data = vec![0u8; usize::from(width) * usize::from(height)];
frame.fill_green(&mut data);
data
} else {
framedata
};
let chunk = AlphaChunk {
_preprocessing: preprocessing,
filtering_method,
data,
};
Ok(chunk)
}

vendor/image/src/codecs/webp/huffman.rs vendored Normal file

@@ -0,0 +1,202 @@
use std::convert::TryInto;
use super::lossless::BitReader;
use super::lossless::DecoderError;
use crate::ImageResult;
/// Rudimentary utility for reading Canonical Huffman Codes.
/// Based on https://github.com/webmproject/libwebp/blob/7f8472a610b61ec780ef0a8873cd954ac512a505/src/utils/huffman.c
///
const MAX_ALLOWED_CODE_LENGTH: usize = 15;
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum HuffmanTreeNode {
Branch(usize), //offset in vector to children
Leaf(u16), //symbol stored in leaf
Empty,
}
/// Huffman tree
#[derive(Clone, Debug, Default)]
pub(crate) struct HuffmanTree {
tree: Vec<HuffmanTreeNode>,
max_nodes: usize,
num_nodes: usize,
}
impl HuffmanTree {
fn is_full(&self) -> bool {
self.num_nodes == self.max_nodes
}
/// Turns a node from empty into a branch and assigns its children
fn assign_children(&mut self, node_index: usize) -> usize {
let offset_index = self.num_nodes - node_index;
self.tree[node_index] = HuffmanTreeNode::Branch(offset_index);
self.num_nodes += 2;
offset_index
}
/// Init a huffman tree
fn init(num_leaves: usize) -> ImageResult<HuffmanTree> {
if num_leaves == 0 {
return Err(DecoderError::HuffmanError.into());
}
let max_nodes = 2 * num_leaves - 1;
let tree = vec![HuffmanTreeNode::Empty; max_nodes];
let num_nodes = 1;
let tree = HuffmanTree {
tree,
max_nodes,
num_nodes,
};
Ok(tree)
}
/// Converts code lengths to codes
fn code_lengths_to_codes(code_lengths: &[u16]) -> ImageResult<Vec<Option<u16>>> {
let max_code_length = *code_lengths.iter().max().unwrap();
if max_code_length > MAX_ALLOWED_CODE_LENGTH.try_into().unwrap() {
return Err(DecoderError::HuffmanError.into());
}
let mut code_length_hist = vec![0; MAX_ALLOWED_CODE_LENGTH + 1];
for &length in code_lengths.iter() {
code_length_hist[usize::from(length)] += 1;
}
code_length_hist[0] = 0;
let mut curr_code = 0;
let mut next_codes = vec![None; MAX_ALLOWED_CODE_LENGTH + 1];
for code_len in 1..=usize::from(max_code_length) {
curr_code = (curr_code + code_length_hist[code_len - 1]) << 1;
next_codes[code_len] = Some(curr_code);
}
let mut huff_codes = vec![None; code_lengths.len()];
for (symbol, &length) in code_lengths.iter().enumerate() {
let length = usize::from(length);
if length > 0 {
huff_codes[symbol] = next_codes[length];
if let Some(value) = next_codes[length].as_mut() {
*value += 1;
}
} else {
huff_codes[symbol] = None;
}
}
Ok(huff_codes)
}
/// Adds a symbol to a huffman tree
fn add_symbol(&mut self, symbol: u16, code: u16, code_length: u16) -> ImageResult<()> {
let mut node_index = 0;
let code = usize::from(code);
for length in (0..code_length).rev() {
if node_index >= self.max_nodes {
return Err(DecoderError::HuffmanError.into());
}
let node = self.tree[node_index];
let offset = match node {
HuffmanTreeNode::Empty => {
if self.is_full() {
return Err(DecoderError::HuffmanError.into());
}
self.assign_children(node_index)
}
HuffmanTreeNode::Leaf(_) => return Err(DecoderError::HuffmanError.into()),
HuffmanTreeNode::Branch(offset) => offset,
};
node_index += offset + ((code >> length) & 1);
}
match self.tree[node_index] {
HuffmanTreeNode::Empty => self.tree[node_index] = HuffmanTreeNode::Leaf(symbol),
HuffmanTreeNode::Leaf(_) => return Err(DecoderError::HuffmanError.into()),
HuffmanTreeNode::Branch(_offset) => return Err(DecoderError::HuffmanError.into()),
}
Ok(())
}
/// Builds a tree implicitly, just from code lengths
pub(crate) fn build_implicit(code_lengths: Vec<u16>) -> ImageResult<HuffmanTree> {
let mut num_symbols = 0;
let mut root_symbol = 0;
for (symbol, length) in code_lengths.iter().enumerate() {
if *length > 0 {
num_symbols += 1;
root_symbol = symbol.try_into().unwrap();
}
}
let mut tree = HuffmanTree::init(num_symbols)?;
if num_symbols == 1 {
tree.add_symbol(root_symbol, 0, 0)?;
} else {
let codes = HuffmanTree::code_lengths_to_codes(&code_lengths)?;
for (symbol, &length) in code_lengths.iter().enumerate() {
if length > 0 && codes[symbol].is_some() {
tree.add_symbol(symbol.try_into().unwrap(), codes[symbol].unwrap(), length)?;
}
}
}
Ok(tree)
}
/// Builds a tree explicitly from lengths, codes and symbols
pub(crate) fn build_explicit(
code_lengths: Vec<u16>,
codes: Vec<u16>,
symbols: Vec<u16>,
) -> ImageResult<HuffmanTree> {
let mut tree = HuffmanTree::init(symbols.len())?;
for i in 0..symbols.len() {
tree.add_symbol(symbols[i], codes[i], code_lengths[i])?;
}
Ok(tree)
}
/// Reads a symbol using the bitstream
pub(crate) fn read_symbol(&self, bit_reader: &mut BitReader) -> ImageResult<u16> {
let mut index = 0;
let mut node = self.tree[index];
while let HuffmanTreeNode::Branch(children_offset) = node {
index += children_offset + bit_reader.read_bits::<usize>(1)?;
node = self.tree[index];
}
let symbol = match node {
HuffmanTreeNode::Branch(_) => unreachable!(),
HuffmanTreeNode::Empty => return Err(DecoderError::HuffmanError.into()),
HuffmanTreeNode::Leaf(symbol) => symbol,
};
Ok(symbol)
}
}
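`code_lengths_to_codes` above implements canonical Huffman code assignment: codes of each length are consecutive, starting from twice the first code of the previous length, and are handed out in symbol order. A standalone sketch of the same scheme (`canonical_codes` is a hypothetical helper mirroring that function):

```rust
// Assign canonical Huffman codes from a slice of code lengths.
// A length of 0 means the symbol is unused and gets no code.
fn canonical_codes(lengths: &[u16]) -> Vec<Option<u16>> {
    let max_len = lengths.iter().copied().max().unwrap_or(0) as usize;
    // Histogram of how many codes exist at each length.
    let mut hist = vec![0u16; max_len + 1];
    for &l in lengths {
        hist[l as usize] += 1;
    }
    hist[0] = 0;
    // First code of each length: (previous first code + previous count) << 1.
    let mut next = vec![0u16; max_len + 1];
    let mut code = 0u16;
    for l in 1..=max_len {
        code = (code + hist[l - 1]) << 1;
        next[l] = code;
    }
    // Hand out codes in symbol order, incrementing within each length.
    lengths
        .iter()
        .map(|&l| {
            if l == 0 {
                None
            } else {
                let c = next[l as usize];
                next[l as usize] += 1;
                Some(c)
            }
        })
        .collect()
}

fn main() {
    // Lengths [2, 1, 3, 3] yield the canonical codes 10, 0, 110, 111.
    assert_eq!(
        canonical_codes(&[2, 1, 3, 3]),
        vec![Some(0b10), Some(0b0), Some(0b110), Some(0b111)]
    );
    println!("ok");
}
```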


@@ -0,0 +1,147 @@
//! Does loop filtering on webp lossy images
use crate::utils::clamp;
#[inline]
fn c(val: i32) -> i32 {
clamp(val, -128, 127)
}
//unsigned to signed
#[inline]
fn u2s(val: u8) -> i32 {
i32::from(val) - 128
}
//signed to unsigned
#[inline]
fn s2u(val: i32) -> u8 {
(c(val) + 128) as u8
}
#[inline]
fn diff(val1: u8, val2: u8) -> u8 {
if val1 > val2 {
val1 - val2
} else {
val2 - val1
}
}
//15.2
fn common_adjust(use_outer_taps: bool, pixels: &mut [u8], point: usize, stride: usize) -> i32 {
let p1 = u2s(pixels[point - 2 * stride]);
let p0 = u2s(pixels[point - stride]);
let q0 = u2s(pixels[point]);
let q1 = u2s(pixels[point + stride]);
//value for the outer 2 pixels
let outer = if use_outer_taps { c(p1 - q1) } else { 0 };
let mut a = c(outer + 3 * (q0 - p0));
let b = (c(a + 3)) >> 3;
a = (c(a + 4)) >> 3;
pixels[point] = s2u(q0 - a);
pixels[point - stride] = s2u(p0 + b);
a
}
fn simple_threshold(filter_limit: i32, pixels: &[u8], point: usize, stride: usize) -> bool {
i32::from(diff(pixels[point - stride], pixels[point])) * 2
+ i32::from(diff(pixels[point - 2 * stride], pixels[point + stride])) / 2
<= filter_limit
}
fn should_filter(
interior_limit: u8,
edge_limit: u8,
pixels: &[u8],
point: usize,
stride: usize,
) -> bool {
simple_threshold(i32::from(edge_limit), pixels, point, stride)
&& diff(pixels[point - 4 * stride], pixels[point - 3 * stride]) <= interior_limit
&& diff(pixels[point - 3 * stride], pixels[point - 2 * stride]) <= interior_limit
&& diff(pixels[point - 2 * stride], pixels[point - stride]) <= interior_limit
&& diff(pixels[point + 3 * stride], pixels[point + 2 * stride]) <= interior_limit
&& diff(pixels[point + 2 * stride], pixels[point + stride]) <= interior_limit
&& diff(pixels[point + stride], pixels[point]) <= interior_limit
}
fn high_edge_variance(threshold: u8, pixels: &[u8], point: usize, stride: usize) -> bool {
diff(pixels[point - 2 * stride], pixels[point - stride]) > threshold
|| diff(pixels[point + stride], pixels[point]) > threshold
}
//simple filter
//affects 4 pixels on an edge (2 on each side)
pub(crate) fn simple_segment(edge_limit: u8, pixels: &mut [u8], point: usize, stride: usize) {
if simple_threshold(i32::from(edge_limit), pixels, point, stride) {
common_adjust(true, pixels, point, stride);
}
}
//normal filter
//works on the 8 pixels on the edges between subblocks inside a macroblock
pub(crate) fn subblock_filter(
hev_threshold: u8,
interior_limit: u8,
edge_limit: u8,
pixels: &mut [u8],
point: usize,
stride: usize,
) {
if should_filter(interior_limit, edge_limit, pixels, point, stride) {
let hv = high_edge_variance(hev_threshold, pixels, point, stride);
let a = (common_adjust(hv, pixels, point, stride) + 1) >> 1;
if !hv {
pixels[point + stride] = s2u(u2s(pixels[point + stride]) - a);
pixels[point - 2 * stride] = s2u(u2s(pixels[point - 2 * stride]) - a);
}
}
}
//normal filter
//works on the 8 pixels on the edges between macroblocks
pub(crate) fn macroblock_filter(
hev_threshold: u8,
interior_limit: u8,
edge_limit: u8,
pixels: &mut [u8],
point: usize,
stride: usize,
) {
let mut spixels = [0i32; 8];
for i in 0..8 {
spixels[i] = u2s(pixels[point + i * stride - 4 * stride]);
}
if should_filter(interior_limit, edge_limit, pixels, point, stride) {
if !high_edge_variance(hev_threshold, pixels, point, stride) {
let w = c(c(spixels[2] - spixels[5]) + 3 * (spixels[4] - spixels[3]));
let mut a = c((27 * w + 63) >> 7);
pixels[point] = s2u(spixels[4] - a);
pixels[point - stride] = s2u(spixels[3] + a);
a = c((18 * w + 63) >> 7);
pixels[point + stride] = s2u(spixels[5] - a);
pixels[point - 2 * stride] = s2u(spixels[2] + a);
a = c((9 * w + 63) >> 7);
pixels[point + 2 * stride] = s2u(spixels[6] - a);
pixels[point - 3 * stride] = s2u(spixels[1] + a);
} else {
common_adjust(true, pixels, point, stride);
}
}
}
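The loop filter above works by shifting pixels into the signed range [-128, 127] (`u2s`), doing the filter arithmetic there, then clamping and shifting back (`s2u`). A standalone sketch of that round trip (hypothetical copies of the two helpers):

```rust
// Map an unsigned pixel into the signed range used by filter arithmetic.
fn u2s(v: u8) -> i32 {
    i32::from(v) - 128
}

// Clamp a signed filter result and map it back to an unsigned pixel.
fn s2u(v: i32) -> u8 {
    (v.clamp(-128, 127) + 128) as u8
}

fn main() {
    // The mapping round-trips every byte value.
    for v in 0u8..=255 {
        assert_eq!(s2u(u2s(v)), v);
    }
    // Out-of-range filter results are clamped back into 0..=255.
    assert_eq!(s2u(300), 255);
    assert_eq!(s2u(-300), 0);
    println!("ok");
}
```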

vendor/image/src/codecs/webp/lossless.rs vendored Normal file

@@ -0,0 +1,783 @@
//! Decoding of lossless WebP images
//!
//! [Lossless spec](https://developers.google.com/speed/webp/docs/webp_lossless_bitstream_specification)
//!
use std::{
convert::TryFrom,
convert::TryInto,
error, fmt,
io::Read,
ops::{AddAssign, Shl},
};
use byteorder::ReadBytesExt;
use crate::{error::DecodingError, ImageError, ImageFormat, ImageResult};
use super::huffman::HuffmanTree;
use super::lossless_transform::{add_pixels, TransformType};
const CODE_LENGTH_CODES: usize = 19;
const CODE_LENGTH_CODE_ORDER: [usize; CODE_LENGTH_CODES] = [
17, 18, 0, 1, 2, 3, 4, 5, 16, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
];
#[rustfmt::skip]
const DISTANCE_MAP: [(i8, i8); 120] = [
(0, 1), (1, 0), (1, 1), (-1, 1), (0, 2), (2, 0), (1, 2), (-1, 2),
(2, 1), (-2, 1), (2, 2), (-2, 2), (0, 3), (3, 0), (1, 3), (-1, 3),
(3, 1), (-3, 1), (2, 3), (-2, 3), (3, 2), (-3, 2), (0, 4), (4, 0),
(1, 4), (-1, 4), (4, 1), (-4, 1), (3, 3), (-3, 3), (2, 4), (-2, 4),
(4, 2), (-4, 2), (0, 5), (3, 4), (-3, 4), (4, 3), (-4, 3), (5, 0),
(1, 5), (-1, 5), (5, 1), (-5, 1), (2, 5), (-2, 5), (5, 2), (-5, 2),
(4, 4), (-4, 4), (3, 5), (-3, 5), (5, 3), (-5, 3), (0, 6), (6, 0),
(1, 6), (-1, 6), (6, 1), (-6, 1), (2, 6), (-2, 6), (6, 2), (-6, 2),
(4, 5), (-4, 5), (5, 4), (-5, 4), (3, 6), (-3, 6), (6, 3), (-6, 3),
(0, 7), (7, 0), (1, 7), (-1, 7), (5, 5), (-5, 5), (7, 1), (-7, 1),
(4, 6), (-4, 6), (6, 4), (-6, 4), (2, 7), (-2, 7), (7, 2), (-7, 2),
(3, 7), (-3, 7), (7, 3), (-7, 3), (5, 6), (-5, 6), (6, 5), (-6, 5),
(8, 0), (4, 7), (-4, 7), (7, 4), (-7, 4), (8, 1), (8, 2), (6, 6),
(-6, 6), (8, 3), (5, 7), (-5, 7), (7, 5), (-7, 5), (8, 4), (6, 7),
(-6, 7), (7, 6), (-7, 6), (8, 5), (7, 7), (-7, 7), (8, 6), (8, 7)
];
const GREEN: usize = 0;
const RED: usize = 1;
const BLUE: usize = 2;
const ALPHA: usize = 3;
const DIST: usize = 4;
const HUFFMAN_CODES_PER_META_CODE: usize = 5;
type HuffmanCodeGroup = [HuffmanTree; HUFFMAN_CODES_PER_META_CODE];
const ALPHABET_SIZE: [u16; HUFFMAN_CODES_PER_META_CODE] = [256 + 24, 256, 256, 256, 40];
#[inline]
pub(crate) fn subsample_size(size: u16, bits: u8) -> u16 {
((u32::from(size) + (1u32 << bits) - 1) >> bits)
.try_into()
.unwrap()
}
#[derive(Debug, Clone, Copy)]
pub(crate) enum DecoderError {
/// Signature of 0x2f not found
LosslessSignatureInvalid(u8),
/// Version Number must be 0
VersionNumberInvalid(u8),
/// Color cache bits must be between 1 and 11
InvalidColorCacheBits(u8),
HuffmanError,
BitStreamError,
TransformError,
}
impl fmt::Display for DecoderError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
DecoderError::LosslessSignatureInvalid(sig) => {
f.write_fmt(format_args!("Invalid lossless signature: {}", sig))
}
DecoderError::VersionNumberInvalid(num) => {
f.write_fmt(format_args!("Invalid version number: {}", num))
}
DecoderError::InvalidColorCacheBits(num) => f.write_fmt(format_args!(
"Invalid color cache bits (must be between 1 and 11): {}",
num
)),
DecoderError::HuffmanError => f.write_fmt(format_args!("Error building Huffman Tree")),
DecoderError::BitStreamError => {
f.write_fmt(format_args!("Error while reading bitstream"))
}
DecoderError::TransformError => {
f.write_fmt(format_args!("Error while reading or writing transforms"))
}
}
}
}
impl From<DecoderError> for ImageError {
fn from(e: DecoderError) -> ImageError {
ImageError::Decoding(DecodingError::new(ImageFormat::WebP.into(), e))
}
}
impl error::Error for DecoderError {}
const NUM_TRANSFORM_TYPES: usize = 4;
// Decodes lossless WebP images
#[derive(Debug)]
pub(crate) struct LosslessDecoder<R> {
r: R,
bit_reader: BitReader,
frame: LosslessFrame,
transforms: [Option<TransformType>; NUM_TRANSFORM_TYPES],
transform_order: Vec<u8>,
}
impl<R: Read> LosslessDecoder<R> {
/// Create a new decoder
pub(crate) fn new(r: R) -> LosslessDecoder<R> {
LosslessDecoder {
r,
bit_reader: BitReader::new(),
frame: Default::default(),
transforms: [None, None, None, None],
transform_order: Vec::new(),
}
}
/// Reads the frame
pub(crate) fn decode_frame(&mut self) -> ImageResult<&LosslessFrame> {
let signature = self.r.read_u8()?;
if signature != 0x2f {
return Err(DecoderError::LosslessSignatureInvalid(signature).into());
}
let mut buf = Vec::new();
self.r.read_to_end(&mut buf)?;
self.bit_reader.init(buf);
self.frame.width = self.bit_reader.read_bits::<u16>(14)? + 1;
self.frame.height = self.bit_reader.read_bits::<u16>(14)? + 1;
let _alpha_used = self.bit_reader.read_bits::<u8>(1)?;
let version_num = self.bit_reader.read_bits::<u8>(3)?;
if version_num != 0 {
return Err(DecoderError::VersionNumberInvalid(version_num).into());
}
let mut data = self.decode_image_stream(self.frame.width, self.frame.height, true)?;
for &trans_index in self.transform_order.iter().rev() {
let trans = self.transforms[usize::from(trans_index)].as_ref().unwrap();
trans.apply_transform(&mut data, self.frame.width, self.frame.height)?;
}
self.frame.buf = data;
Ok(&self.frame)
}
//used for alpha data in extended decoding
pub(crate) fn decode_frame_implicit_dims(
&mut self,
width: u16,
height: u16,
) -> ImageResult<&LosslessFrame> {
let mut buf = Vec::new();
self.r.read_to_end(&mut buf)?;
self.bit_reader.init(buf);
self.frame.width = width;
self.frame.height = height;
let mut data = self.decode_image_stream(self.frame.width, self.frame.height, true)?;
// transform_order holds indices (0-3) into `transforms`, in the order they were decoded
for &trans_index in self.transform_order.iter().rev() {
let trans = self.transforms[usize::from(trans_index)].as_ref().unwrap();
trans.apply_transform(&mut data, self.frame.width, self.frame.height)?;
}
self.frame.buf = data;
Ok(&self.frame)
}
/// Reads image data from the bitstream
/// Can be in any of the 5 roles described in the specification
/// The ARGB image role behaves differently from the other 4
/// xsize and ysize describe the size of the blocks, where each block has its own entropy code
fn decode_image_stream(
&mut self,
xsize: u16,
ysize: u16,
is_argb_img: bool,
) -> ImageResult<Vec<u32>> {
let trans_xsize = if is_argb_img {
self.read_transforms()?
} else {
xsize
};
let color_cache_bits = self.read_color_cache()?;
let color_cache = color_cache_bits.map(|bits| {
let size = 1 << bits;
let cache = vec![0u32; size];
ColorCache {
color_cache_bits: bits,
color_cache: cache,
}
});
let huffman_info = self.read_huffman_codes(is_argb_img, trans_xsize, ysize, color_cache)?;
//decode data
let data = self.decode_image_data(trans_xsize, ysize, huffman_info)?;
Ok(data)
}
/// Reads transforms and their data from the bitstream
fn read_transforms(&mut self) -> ImageResult<u16> {
let mut xsize = self.frame.width;
while self.bit_reader.read_bits::<u8>(1)? == 1 {
let transform_type_val = self.bit_reader.read_bits::<u8>(2)?;
if self.transforms[usize::from(transform_type_val)].is_some() {
//can only have one of each transform, error
return Err(DecoderError::TransformError.into());
}
self.transform_order.push(transform_type_val);
let transform_type = match transform_type_val {
0 => {
//predictor
let size_bits = self.bit_reader.read_bits::<u8>(3)? + 2;
let block_xsize = subsample_size(xsize, size_bits);
let block_ysize = subsample_size(self.frame.height, size_bits);
let data = self.decode_image_stream(block_xsize, block_ysize, false)?;
TransformType::PredictorTransform {
size_bits,
predictor_data: data,
}
}
1 => {
//color transform
let size_bits = self.bit_reader.read_bits::<u8>(3)? + 2;
let block_xsize = subsample_size(xsize, size_bits);
let block_ysize = subsample_size(self.frame.height, size_bits);
let data = self.decode_image_stream(block_xsize, block_ysize, false)?;
TransformType::ColorTransform {
size_bits,
transform_data: data,
}
}
2 => {
//subtract green
TransformType::SubtractGreen
}
3 => {
let color_table_size = self.bit_reader.read_bits::<u16>(8)? + 1;
let mut color_map = self.decode_image_stream(color_table_size, 1, false)?;
let bits = if color_table_size <= 2 {
3
} else if color_table_size <= 4 {
2
} else if color_table_size <= 16 {
1
} else {
0
};
xsize = subsample_size(xsize, bits);
Self::adjust_color_map(&mut color_map);
TransformType::ColorIndexingTransform {
table_size: color_table_size,
table_data: color_map,
}
}
_ => unreachable!(),
};
self.transforms[usize::from(transform_type_val)] = Some(transform_type);
}
Ok(xsize)
}
/// Adjusts the color map since it's subtraction coded
fn adjust_color_map(color_map: &mut Vec<u32>) {
for i in 1..color_map.len() {
color_map[i] = add_pixels(color_map[i], color_map[i - 1]);
}
}
/// Reads huffman codes associated with an image
fn read_huffman_codes(
&mut self,
read_meta: bool,
xsize: u16,
ysize: u16,
color_cache: Option<ColorCache>,
) -> ImageResult<HuffmanInfo> {
let mut num_huff_groups = 1;
let mut huffman_bits = 0;
let mut huffman_xsize = 1;
let mut huffman_ysize = 1;
let mut entropy_image = Vec::new();
if read_meta && self.bit_reader.read_bits::<u8>(1)? == 1 {
//meta huffman codes
huffman_bits = self.bit_reader.read_bits::<u8>(3)? + 2;
huffman_xsize = subsample_size(xsize, huffman_bits);
huffman_ysize = subsample_size(ysize, huffman_bits);
entropy_image = self.decode_image_stream(huffman_xsize, huffman_ysize, false)?;
for pixel in entropy_image.iter_mut() {
let meta_huff_code = (*pixel >> 8) & 0xffff;
*pixel = meta_huff_code;
if meta_huff_code >= num_huff_groups {
num_huff_groups = meta_huff_code + 1;
}
}
}
let mut hufftree_groups = Vec::new();
for _i in 0..num_huff_groups {
let mut group: HuffmanCodeGroup = Default::default();
for j in 0..HUFFMAN_CODES_PER_META_CODE {
let mut alphabet_size = ALPHABET_SIZE[j];
if j == 0 {
if let Some(color_cache) = color_cache.as_ref() {
alphabet_size += 1 << color_cache.color_cache_bits;
}
}
let tree = self.read_huffman_code(alphabet_size)?;
group[j] = tree;
}
hufftree_groups.push(group);
}
let huffman_mask = if huffman_bits == 0 {
!0
} else {
(1 << huffman_bits) - 1
};
let info = HuffmanInfo {
xsize: huffman_xsize,
_ysize: huffman_ysize,
color_cache,
image: entropy_image,
bits: huffman_bits,
mask: huffman_mask,
huffman_code_groups: hufftree_groups,
};
Ok(info)
}
/// Decodes and returns a single huffman tree
fn read_huffman_code(&mut self, alphabet_size: u16) -> ImageResult<HuffmanTree> {
let simple = self.bit_reader.read_bits::<u8>(1)? == 1;
if simple {
let num_symbols = self.bit_reader.read_bits::<u8>(1)? + 1;
let mut code_lengths = vec![u16::from(num_symbols - 1)];
let mut codes = vec![0];
let mut symbols = Vec::new();
let is_first_8bits = self.bit_reader.read_bits::<u8>(1)?;
symbols.push(self.bit_reader.read_bits::<u16>(1 + 7 * is_first_8bits)?);
if num_symbols == 2 {
symbols.push(self.bit_reader.read_bits::<u16>(8)?);
code_lengths.push(1);
codes.push(1);
}
HuffmanTree::build_explicit(code_lengths, codes, symbols)
} else {
let mut code_length_code_lengths = vec![0; CODE_LENGTH_CODES];
let num_code_lengths = 4 + self.bit_reader.read_bits::<usize>(4)?;
for i in 0..num_code_lengths {
code_length_code_lengths[CODE_LENGTH_CODE_ORDER[i]] =
self.bit_reader.read_bits(3)?;
}
let new_code_lengths =
self.read_huffman_code_lengths(code_length_code_lengths, alphabet_size)?;
HuffmanTree::build_implicit(new_code_lengths)
}
}
/// Reads huffman code lengths
fn read_huffman_code_lengths(
&mut self,
code_length_code_lengths: Vec<u16>,
num_symbols: u16,
) -> ImageResult<Vec<u16>> {
let table = HuffmanTree::build_implicit(code_length_code_lengths)?;
let mut max_symbol = if self.bit_reader.read_bits::<u8>(1)? == 1 {
let length_nbits = 2 + 2 * self.bit_reader.read_bits::<u8>(3)?;
2 + self.bit_reader.read_bits::<u16>(length_nbits)?
} else {
num_symbols
};
let mut code_lengths = vec![0; usize::from(num_symbols)];
let mut prev_code_len = 8; //default code length
let mut symbol = 0;
while symbol < num_symbols {
if max_symbol == 0 {
break;
}
max_symbol -= 1;
let code_len = table.read_symbol(&mut self.bit_reader)?;
if code_len < 16 {
code_lengths[usize::from(symbol)] = code_len;
symbol += 1;
if code_len != 0 {
prev_code_len = code_len;
}
} else {
let use_prev = code_len == 16;
let slot = code_len - 16;
let extra_bits = match slot {
0 => 2,
1 => 3,
2 => 7,
_ => return Err(DecoderError::BitStreamError.into()),
};
let repeat_offset = match slot {
0 | 1 => 3,
2 => 11,
_ => return Err(DecoderError::BitStreamError.into()),
};
let mut repeat = self.bit_reader.read_bits::<u16>(extra_bits)? + repeat_offset;
if symbol + repeat > num_symbols {
return Err(DecoderError::BitStreamError.into());
} else {
let length = if use_prev { prev_code_len } else { 0 };
while repeat > 0 {
repeat -= 1;
code_lengths[usize::from(symbol)] = length;
symbol += 1;
}
}
}
}
Ok(code_lengths)
}
/// Decodes the image data using the huffman trees and one of the 3 methods of decoding
fn decode_image_data(
&mut self,
width: u16,
height: u16,
mut huffman_info: HuffmanInfo,
) -> ImageResult<Vec<u32>> {
let num_values = usize::from(width) * usize::from(height);
let mut data = vec![0; num_values];
let huff_index = huffman_info.get_huff_index(0, 0);
let mut tree = &huffman_info.huffman_code_groups[huff_index];
let mut last_cached = 0;
let mut index = 0;
let mut x = 0;
let mut y = 0;
while index < num_values {
if (x & huffman_info.mask) == 0 {
let index = huffman_info.get_huff_index(x, y);
tree = &huffman_info.huffman_code_groups[index];
}
let code = tree[GREEN].read_symbol(&mut self.bit_reader)?;
//check code
if code < 256 {
//literal, so just use huffman codes and read as argb
let red = tree[RED].read_symbol(&mut self.bit_reader)?;
let blue = tree[BLUE].read_symbol(&mut self.bit_reader)?;
let alpha = tree[ALPHA].read_symbol(&mut self.bit_reader)?;
data[index] = (u32::from(alpha) << 24)
+ (u32::from(red) << 16)
+ (u32::from(code) << 8)
+ u32::from(blue);
index += 1;
x += 1;
if x >= width {
x = 0;
y += 1;
}
} else if code < 256 + 24 {
//backward reference, so go back and use that to add image data
let length_symbol = code - 256;
let length = Self::get_copy_distance(&mut self.bit_reader, length_symbol)?;
let dist_symbol = tree[DIST].read_symbol(&mut self.bit_reader)?;
let dist_code = Self::get_copy_distance(&mut self.bit_reader, dist_symbol)?;
let dist = Self::plane_code_to_distance(width, dist_code);
if index < dist || num_values - index < length {
return Err(DecoderError::BitStreamError.into());
}
for i in 0..length {
data[index + i] = data[index + i - dist];
}
index += length;
x += u16::try_from(length).unwrap();
while x >= width {
x -= width;
y += 1;
}
if index < num_values {
let index = huffman_info.get_huff_index(x, y);
tree = &huffman_info.huffman_code_groups[index];
}
} else {
//color cache, so use previously stored pixels to get this pixel
let key = code - 256 - 24;
if let Some(color_cache) = huffman_info.color_cache.as_mut() {
//cache old colors
while last_cached < index {
color_cache.insert(data[last_cached]);
last_cached += 1;
}
data[index] = color_cache.lookup(key.into())?;
} else {
return Err(DecoderError::BitStreamError.into());
}
index += 1;
x += 1;
if x >= width {
x = 0;
y += 1;
}
}
}
Ok(data)
}
/// Reads color cache data from the bitstream
fn read_color_cache(&mut self) -> ImageResult<Option<u8>> {
if self.bit_reader.read_bits::<u8>(1)? == 1 {
let code_bits = self.bit_reader.read_bits::<u8>(4)?;
if !(1..=11).contains(&code_bits) {
return Err(DecoderError::InvalidColorCacheBits(code_bits).into());
}
Ok(Some(code_bits))
} else {
Ok(None)
}
}
/// Gets the copy distance from the prefix code and bitstream
fn get_copy_distance(bit_reader: &mut BitReader, prefix_code: u16) -> ImageResult<usize> {
if prefix_code < 4 {
return Ok(usize::from(prefix_code + 1));
}
let extra_bits: u8 = ((prefix_code - 2) >> 1).try_into().unwrap();
let offset = (2 + (usize::from(prefix_code) & 1)) << extra_bits;
Ok(offset + bit_reader.read_bits::<usize>(extra_bits)? + 1)
}
/// Gets distance to pixel
fn plane_code_to_distance(xsize: u16, plane_code: usize) -> usize {
if plane_code > 120 {
plane_code - 120
} else {
let (xoffset, yoffset) = DISTANCE_MAP[plane_code - 1];
let dist = i32::from(xoffset) + i32::from(yoffset) * i32::from(xsize);
if dist < 1 {
return 1;
}
dist.try_into().unwrap()
}
}
}
#[derive(Debug, Clone)]
struct HuffmanInfo {
xsize: u16,
_ysize: u16,
color_cache: Option<ColorCache>,
image: Vec<u32>,
bits: u8,
mask: u16,
huffman_code_groups: Vec<HuffmanCodeGroup>,
}
impl HuffmanInfo {
fn get_huff_index(&self, x: u16, y: u16) -> usize {
if self.bits == 0 {
return 0;
}
// widen before multiplying: (y >> bits) * xsize can overflow u16 for large images
let position = usize::from(y >> self.bits) * usize::from(self.xsize) + usize::from(x >> self.bits);
let meta_huff_code: usize = self.image[position].try_into().unwrap();
meta_huff_code
}
}
#[derive(Debug, Clone)]
struct ColorCache {
color_cache_bits: u8,
color_cache: Vec<u32>,
}
impl ColorCache {
fn insert(&mut self, color: u32) {
let index = (0x1e35a7bdu32.overflowing_mul(color).0) >> (32 - self.color_cache_bits);
self.color_cache[index as usize] = color;
}
fn lookup(&self, index: usize) -> ImageResult<u32> {
match self.color_cache.get(index) {
Some(&value) => Ok(value),
None => Err(DecoderError::BitStreamError.into()),
}
}
}
#[derive(Debug, Clone)]
pub(crate) struct BitReader {
buf: Vec<u8>,
index: usize,
bit_count: u8,
}
impl BitReader {
fn new() -> BitReader {
BitReader {
buf: Vec::new(),
index: 0,
bit_count: 0,
}
}
fn init(&mut self, buf: Vec<u8>) {
self.buf = buf;
}
pub(crate) fn read_bits<T>(&mut self, num: u8) -> ImageResult<T>
where
T: num_traits::Unsigned + Shl<u8, Output = T> + AddAssign<T> + From<bool>,
{
let mut value: T = T::zero();
for i in 0..num {
if self.buf.len() <= self.index {
return Err(DecoderError::BitStreamError.into());
}
let bit_true = self.buf[self.index] & (1 << self.bit_count) != 0;
value += T::from(bit_true) << i;
self.bit_count = if self.bit_count == 7 {
self.index += 1;
0
} else {
self.bit_count + 1
};
}
Ok(value)
}
}
#[derive(Debug, Clone, Default)]
pub(crate) struct LosslessFrame {
pub(crate) width: u16,
pub(crate) height: u16,
pub(crate) buf: Vec<u32>,
}
impl LosslessFrame {
/// Fills a buffer by converting from argb to rgba
pub(crate) fn fill_rgba(&self, buf: &mut [u8]) {
for (&argb_val, chunk) in self.buf.iter().zip(buf.chunks_exact_mut(4)) {
chunk[0] = ((argb_val >> 16) & 0xff).try_into().unwrap();
chunk[1] = ((argb_val >> 8) & 0xff).try_into().unwrap();
chunk[2] = (argb_val & 0xff).try_into().unwrap();
chunk[3] = ((argb_val >> 24) & 0xff).try_into().unwrap();
}
}
/// Get buffer size from the image
pub(crate) fn get_buf_size(&self) -> usize {
usize::from(self.width) * usize::from(self.height) * 4
}
/// Fills a buffer with just the green values from the lossless decoding
/// Used in extended alpha decoding
pub(crate) fn fill_green(&self, buf: &mut [u8]) {
for (&argb_val, buf_value) in self.buf.iter().zip(buf.iter_mut()) {
*buf_value = ((argb_val >> 8) & 0xff).try_into().unwrap();
}
}
}
#[cfg(test)]
mod test {
use super::BitReader;
#[test]
fn bit_read_test() {
let mut bit_reader = BitReader::new();
//10011100 01000001 11100001
let buf = vec![0x9C, 0x41, 0xE1];
bit_reader.init(buf);
assert_eq!(bit_reader.read_bits::<u8>(3).unwrap(), 4); //100
assert_eq!(bit_reader.read_bits::<u8>(2).unwrap(), 3); //11
assert_eq!(bit_reader.read_bits::<u8>(6).unwrap(), 12); //001100
assert_eq!(bit_reader.read_bits::<u16>(10).unwrap(), 40); //0000101000
assert_eq!(bit_reader.read_bits::<u8>(3).unwrap(), 7); //111
}
#[test]
fn bit_read_error_test() {
let mut bit_reader = BitReader::new();
//01101010
let buf = vec![0x6A];
bit_reader.init(buf);
assert_eq!(bit_reader.read_bits::<u8>(3).unwrap(), 2); //010
assert_eq!(bit_reader.read_bits::<u8>(5).unwrap(), 13); //01101
assert!(bit_reader.read_bits::<u8>(4).is_err()); //error
}
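// Added example (not in the original source): a sketch checking that
// `subsample_size` computes ceil(size / 2^bits), the block-count
// formula used for all subsampled transform images in this decoder.
#[test]
fn subsample_size_test() {
use super::subsample_size;
// 8 pixels at 3 subsample bits fit exactly one 8-pixel block
assert_eq!(subsample_size(8, 3), 1);
// 9 pixels need a second, partially filled block
assert_eq!(subsample_size(9, 3), 2);
// with no subsampling the dimension is unchanged
assert_eq!(subsample_size(16383, 0), 16383);
}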
}


@@ -0,0 +1,464 @@
use std::convert::TryFrom;
use std::convert::TryInto;
use super::lossless::subsample_size;
use super::lossless::DecoderError;
#[derive(Debug, Clone)]
pub(crate) enum TransformType {
PredictorTransform {
size_bits: u8,
predictor_data: Vec<u32>,
},
ColorTransform {
size_bits: u8,
transform_data: Vec<u32>,
},
SubtractGreen,
ColorIndexingTransform {
table_size: u16,
table_data: Vec<u32>,
},
}
impl TransformType {
/// Applies a transform to the image data
pub(crate) fn apply_transform(
&self,
image_data: &mut Vec<u32>,
width: u16,
height: u16,
) -> Result<(), DecoderError> {
match self {
TransformType::PredictorTransform {
size_bits,
predictor_data,
} => {
let block_xsize = usize::from(subsample_size(width, *size_bits));
let width = usize::from(width);
let height = usize::from(height);
if image_data.len() < width * height {
return Err(DecoderError::TransformError);
}
//handle top and left borders specially
//this involves ignoring mode and just setting prediction values like this
image_data[0] = add_pixels(image_data[0], 0xff000000);
for x in 1..width {
image_data[x] = add_pixels(image_data[x], get_left(image_data, x, 0, width));
}
for y in 1..height {
image_data[y * width] =
add_pixels(image_data[y * width], get_top(image_data, 0, y, width));
}
for y in 1..height {
for x in 1..width {
let block_index = (y >> size_bits) * block_xsize + (x >> size_bits);
let index = y * width + x;
let green = (predictor_data[block_index] >> 8) & 0xff;
match green {
0 => image_data[index] = add_pixels(image_data[index], 0xff000000),
1 => {
image_data[index] =
add_pixels(image_data[index], get_left(image_data, x, y, width))
}
2 => {
image_data[index] =
add_pixels(image_data[index], get_top(image_data, x, y, width))
}
3 => {
image_data[index] = add_pixels(
image_data[index],
get_top_right(image_data, x, y, width),
)
}
4 => {
image_data[index] = add_pixels(
image_data[index],
get_top_left(image_data, x, y, width),
)
}
5 => {
image_data[index] = add_pixels(image_data[index], {
let first = average2(
get_left(image_data, x, y, width),
get_top_right(image_data, x, y, width),
);
average2(first, get_top(image_data, x, y, width))
})
}
6 => {
image_data[index] = add_pixels(
image_data[index],
average2(
get_left(image_data, x, y, width),
get_top_left(image_data, x, y, width),
),
)
}
7 => {
image_data[index] = add_pixels(
image_data[index],
average2(
get_left(image_data, x, y, width),
get_top(image_data, x, y, width),
),
)
}
8 => {
image_data[index] = add_pixels(
image_data[index],
average2(
get_top_left(image_data, x, y, width),
get_top(image_data, x, y, width),
),
)
}
9 => {
image_data[index] = add_pixels(
image_data[index],
average2(
get_top(image_data, x, y, width),
get_top_right(image_data, x, y, width),
),
)
}
10 => {
image_data[index] = add_pixels(image_data[index], {
let first = average2(
get_left(image_data, x, y, width),
get_top_left(image_data, x, y, width),
);
let second = average2(
get_top(image_data, x, y, width),
get_top_right(image_data, x, y, width),
);
average2(first, second)
})
}
11 => {
image_data[index] = add_pixels(
image_data[index],
select(
get_left(image_data, x, y, width),
get_top(image_data, x, y, width),
get_top_left(image_data, x, y, width),
),
)
}
12 => {
image_data[index] = add_pixels(
image_data[index],
clamp_add_subtract_full(
get_left(image_data, x, y, width),
get_top(image_data, x, y, width),
get_top_left(image_data, x, y, width),
),
)
}
13 => {
image_data[index] = add_pixels(image_data[index], {
let first = average2(
get_left(image_data, x, y, width),
get_top(image_data, x, y, width),
);
clamp_add_subtract_half(
first,
get_top_left(image_data, x, y, width),
)
})
}
_ => {}
}
}
}
}
TransformType::ColorTransform {
size_bits,
transform_data,
} => {
let block_xsize = usize::from(subsample_size(width, *size_bits));
let width = usize::from(width);
let height = usize::from(height);
for y in 0..height {
for x in 0..width {
let block_index = (y >> size_bits) * block_xsize + (x >> size_bits);
let index = y * width + x;
let multiplier =
ColorTransformElement::from_color_code(transform_data[block_index]);
image_data[index] = transform_color(&multiplier, image_data[index]);
}
}
}
TransformType::SubtractGreen => {
let width = usize::from(width);
for y in 0..usize::from(height) {
for x in 0..width {
image_data[y * width + x] = add_green(image_data[y * width + x]);
}
}
}
TransformType::ColorIndexingTransform {
table_size,
table_data,
} => {
let mut new_image_data =
Vec::with_capacity(usize::from(width) * usize::from(height));
let table_size = *table_size;
let width_bits: u8 = if table_size <= 2 {
3
} else if table_size <= 4 {
2
} else if table_size <= 16 {
1
} else {
0
};
let bits_per_pixel = 8 >> width_bits;
let mask = (1 << bits_per_pixel) - 1;
let mut src = 0;
let width = usize::from(width);
let pixels_per_byte = 1 << width_bits;
let count_mask = pixels_per_byte - 1;
let mut packed_pixels = 0;
for _y in 0..usize::from(height) {
for x in 0..width {
if (x & count_mask) == 0 {
packed_pixels = (image_data[src] >> 8) & 0xff;
src += 1;
}
let pixels: usize = (packed_pixels & mask).try_into().unwrap();
let new_val = if pixels >= table_size.into() {
0x00000000
} else {
table_data[pixels]
};
new_image_data.push(new_val);
packed_pixels >>= bits_per_pixel;
}
}
*image_data = new_image_data;
}
}
Ok(())
}
}
//predictor functions
/// Adds 2 pixels, wrapping each channel independently mod 256
pub(crate) fn add_pixels(a: u32, b: u32) -> u32 {
let new_alpha = ((a >> 24) + (b >> 24)) & 0xff;
let new_red = (((a >> 16) & 0xff) + ((b >> 16) & 0xff)) & 0xff;
let new_green = (((a >> 8) & 0xff) + ((b >> 8) & 0xff)) & 0xff;
let new_blue = ((a & 0xff) + (b & 0xff)) & 0xff;
(new_alpha << 24) + (new_red << 16) + (new_green << 8) + new_blue
}
/// Get left pixel
fn get_left(data: &[u32], x: usize, y: usize, width: usize) -> u32 {
data[y * width + x - 1]
}
/// Get top pixel
fn get_top(data: &[u32], x: usize, y: usize, width: usize) -> u32 {
data[(y - 1) * width + x]
}
/// Get pixel to top right
fn get_top_right(data: &[u32], x: usize, y: usize, width: usize) -> u32 {
// if x == width - 1 this gets the left most pixel of the current row
// as described in the specification
data[(y - 1) * width + x + 1]
}
/// Get pixel to top left
fn get_top_left(data: &[u32], x: usize, y: usize, width: usize) -> u32 {
data[(y - 1) * width + x - 1]
}
/// Get average of 2 pixels
fn average2(a: u32, b: u32) -> u32 {
let mut avg = 0u32;
for i in 0..4 {
let sub_a: u8 = ((a >> (i * 8)) & 0xff).try_into().unwrap();
let sub_b: u8 = ((b >> (i * 8)) & 0xff).try_into().unwrap();
avg |= u32::from(sub_average2(sub_a, sub_b)) << (i * 8);
}
avg
}
/// Get average of 2 bytes
fn sub_average2(a: u8, b: u8) -> u8 {
((u16::from(a) + u16::from(b)) / 2).try_into().unwrap()
}
/// Get a specific byte from argb pixel
fn get_byte(val: u32, byte: u8) -> u8 {
((val >> (byte * 8)) & 0xff).try_into().unwrap()
}
/// Get byte as i32 for convenience
fn get_byte_i32(val: u32, byte: u8) -> i32 {
i32::from(get_byte(val, byte))
}
/// Select left or top byte
fn select(left: u32, top: u32, top_left: u32) -> u32 {
let predict_alpha = get_byte_i32(left, 3) + get_byte_i32(top, 3) - get_byte_i32(top_left, 3);
let predict_red = get_byte_i32(left, 2) + get_byte_i32(top, 2) - get_byte_i32(top_left, 2);
let predict_green = get_byte_i32(left, 1) + get_byte_i32(top, 1) - get_byte_i32(top_left, 1);
let predict_blue = get_byte_i32(left, 0) + get_byte_i32(top, 0) - get_byte_i32(top_left, 0);
let predict_left = i32::abs(predict_alpha - get_byte_i32(left, 3))
+ i32::abs(predict_red - get_byte_i32(left, 2))
+ i32::abs(predict_green - get_byte_i32(left, 1))
+ i32::abs(predict_blue - get_byte_i32(left, 0));
let predict_top = i32::abs(predict_alpha - get_byte_i32(top, 3))
+ i32::abs(predict_red - get_byte_i32(top, 2))
+ i32::abs(predict_green - get_byte_i32(top, 1))
+ i32::abs(predict_blue - get_byte_i32(top, 0));
if predict_left < predict_top {
left
} else {
top
}
}
/// Clamp a to [0, 255]
fn clamp(a: i32) -> i32 {
if a < 0 {
0
} else if a > 255 {
255
} else {
a
}
}
/// Clamp add subtract full on one part
fn clamp_add_subtract_full_sub(a: i32, b: i32, c: i32) -> i32 {
clamp(a + b - c)
}
/// Clamp add subtract half on one part
fn clamp_add_subtract_half_sub(a: i32, b: i32) -> i32 {
clamp(a + (a - b) / 2)
}
/// Clamp add subtract full on 3 pixels
fn clamp_add_subtract_full(a: u32, b: u32, c: u32) -> u32 {
let mut value: u32 = 0;
for i in 0..4u8 {
let sub_a: i32 = ((a >> (i * 8)) & 0xff).try_into().unwrap();
let sub_b: i32 = ((b >> (i * 8)) & 0xff).try_into().unwrap();
let sub_c: i32 = ((c >> (i * 8)) & 0xff).try_into().unwrap();
value |=
u32::try_from(clamp_add_subtract_full_sub(sub_a, sub_b, sub_c)).unwrap() << (i * 8);
}
value
}
/// Clamp add subtract half on 2 pixels
fn clamp_add_subtract_half(a: u32, b: u32) -> u32 {
let mut value = 0;
for i in 0..4u8 {
let sub_a: i32 = ((a >> (i * 8)) & 0xff).try_into().unwrap();
let sub_b: i32 = ((b >> (i * 8)) & 0xff).try_into().unwrap();
value |= u32::try_from(clamp_add_subtract_half_sub(sub_a, sub_b)).unwrap() << (i * 8);
}
value
}
//color transform
#[derive(Debug, Clone, Copy)]
struct ColorTransformElement {
green_to_red: u8,
green_to_blue: u8,
red_to_blue: u8,
}
impl ColorTransformElement {
fn from_color_code(color_code: u32) -> ColorTransformElement {
ColorTransformElement {
green_to_red: (color_code & 0xff).try_into().unwrap(),
green_to_blue: ((color_code >> 8) & 0xff).try_into().unwrap(),
red_to_blue: ((color_code >> 16) & 0xff).try_into().unwrap(),
}
}
}
/// Applies the color transform to red and blue, based on the green channel
fn color_transform(red: u8, blue: u8, green: u8, trans: &ColorTransformElement) -> (u8, u8) {
let mut temp_red = u32::from(red);
let mut temp_blue = u32::from(blue);
// the spec requires the u8 channel values to be reinterpreted as signed two's
// complement i8, and the additions to wrap mod 256
temp_red += color_transform_delta(trans.green_to_red as i8, green as i8);
temp_blue += color_transform_delta(trans.green_to_blue as i8, green as i8);
temp_blue += color_transform_delta(trans.red_to_blue as i8, temp_red as i8);
(
(temp_red & 0xff).try_into().unwrap(),
(temp_blue & 0xff).try_into().unwrap(),
)
}
/// Does color transform on 2 numbers
fn color_transform_delta(t: i8, c: i8) -> u32 {
((i16::from(t) * i16::from(c)) as u32) >> 5
}
// Does color transform on a pixel with a color transform element
fn transform_color(multiplier: &ColorTransformElement, color_value: u32) -> u32 {
let alpha = get_byte(color_value, 3);
let red = get_byte(color_value, 2);
let green = get_byte(color_value, 1);
let blue = get_byte(color_value, 0);
let (new_red, new_blue) = color_transform(red, blue, green, multiplier);
(u32::from(alpha) << 24)
+ (u32::from(new_red) << 16)
+ (u32::from(green) << 8)
+ u32::from(new_blue)
}
//subtract green function
/// Adds green to red and blue of a pixel
fn add_green(argb: u32) -> u32 {
let red = (argb >> 16) & 0xff;
let green = (argb >> 8) & 0xff;
let blue = argb & 0xff;
let new_red = (red + green) & 0xff;
let new_blue = (blue + green) & 0xff;
(argb & 0xff00ff00) | (new_red << 16) | (new_blue)
}
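// Added example (not in the original source): a small test module
// exercising the pure pixel helpers above. `add_pixels` must wrap each
// channel independently mod 256, and `add_green` must invert the
// encoder's subtract-green step.
#[cfg(test)]
mod transform_test {
use super::{add_green, add_pixels};
#[test]
fn add_pixels_wraps_per_channel() {
// each channel of 0x01020304 gains 0xff and wraps: 0x00010203
assert_eq!(add_pixels(0x01020304, 0xffffffff), 0x00010203);
}
#[test]
fn add_green_adds_green_to_red_and_blue() {
// red 0xf0 + green 0x20 wraps to 0x10; blue 0x10 + 0x20 = 0x30
assert_eq!(add_green(0xfff02010), 0xff102030);
}
}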

28
vendor/image/src/codecs/webp/mod.rs vendored Normal file

@@ -0,0 +1,28 @@
//! Decoding and Encoding of WebP Images
#[cfg(feature = "webp-encoder")]
pub use self::encoder::{WebPEncoder, WebPQuality};
#[cfg(feature = "webp-encoder")]
mod encoder;
#[cfg(feature = "webp")]
pub use self::decoder::WebPDecoder;
#[cfg(feature = "webp")]
mod decoder;
#[cfg(feature = "webp")]
mod extended;
#[cfg(feature = "webp")]
mod huffman;
#[cfg(feature = "webp")]
mod loop_filter;
#[cfg(feature = "webp")]
mod lossless;
#[cfg(feature = "webp")]
mod lossless_transform;
#[cfg(feature = "webp")]
mod transform;
#[cfg(feature = "webp")]
pub mod vp8;


@@ -0,0 +1,77 @@
static CONST1: i64 = 20091;
static CONST2: i64 = 35468;
pub(crate) fn idct4x4(block: &mut [i32]) {
// The intermediate results may overflow i32, so we widen to i64.
fn fetch(block: &mut [i32], idx: usize) -> i64 {
i64::from(block[idx])
}
for i in 0usize..4 {
let a1 = fetch(block, i) + fetch(block, 8 + i);
let b1 = fetch(block, i) - fetch(block, 8 + i);
let t1 = (fetch(block, 4 + i) * CONST2) >> 16;
let t2 = fetch(block, 12 + i) + ((fetch(block, 12 + i) * CONST1) >> 16);
let c1 = t1 - t2;
let t1 = fetch(block, 4 + i) + ((fetch(block, 4 + i) * CONST1) >> 16);
let t2 = (fetch(block, 12 + i) * CONST2) >> 16;
let d1 = t1 + t2;
block[i] = (a1 + d1) as i32;
block[4 + i] = (b1 + c1) as i32;
block[4 * 3 + i] = (a1 - d1) as i32;
block[4 * 2 + i] = (b1 - c1) as i32;
}
for i in 0usize..4 {
let a1 = fetch(block, 4 * i) + fetch(block, 4 * i + 2);
let b1 = fetch(block, 4 * i) - fetch(block, 4 * i + 2);
let t1 = (fetch(block, 4 * i + 1) * CONST2) >> 16;
let t2 = fetch(block, 4 * i + 3) + ((fetch(block, 4 * i + 3) * CONST1) >> 16);
let c1 = t1 - t2;
let t1 = fetch(block, 4 * i + 1) + ((fetch(block, 4 * i + 1) * CONST1) >> 16);
let t2 = (fetch(block, 4 * i + 3) * CONST2) >> 16;
let d1 = t1 + t2;
block[4 * i] = ((a1 + d1 + 4) >> 3) as i32;
block[4 * i + 3] = ((a1 - d1 + 4) >> 3) as i32;
block[4 * i + 1] = ((b1 + c1 + 4) >> 3) as i32;
block[4 * i + 2] = ((b1 - c1 + 4) >> 3) as i32;
}
}
// 14.3
pub(crate) fn iwht4x4(block: &mut [i32]) {
for i in 0usize..4 {
let a1 = block[i] + block[12 + i];
let b1 = block[4 + i] + block[8 + i];
let c1 = block[4 + i] - block[8 + i];
let d1 = block[i] - block[12 + i];
block[i] = a1 + b1;
block[4 + i] = c1 + d1;
block[8 + i] = a1 - b1;
block[12 + i] = d1 - c1;
}
for i in 0usize..4 {
let a1 = block[4 * i] + block[4 * i + 3];
let b1 = block[4 * i + 1] + block[4 * i + 2];
let c1 = block[4 * i + 1] - block[4 * i + 2];
let d1 = block[4 * i] - block[4 * i + 3];
let a2 = a1 + b1;
let b2 = c1 + d1;
let c2 = a1 - b1;
let d2 = d1 - c1;
block[4 * i] = (a2 + 3) >> 3;
block[4 * i + 1] = (b2 + 3) >> 3;
block[4 * i + 2] = (c2 + 3) >> 3;
block[4 * i + 3] = (d2 + 3) >> 3;
}
}
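// Added example (not in the original source): a sketch verifying the
// inverse WHT on a DC-only block. A lone DC coefficient of 8 must
// spread evenly, giving (8 + 3) >> 3 = 1 in all 16 positions.
#[cfg(test)]
mod test {
use super::iwht4x4;
#[test]
fn iwht4x4_dc_only() {
let mut block = [0i32; 16];
block[0] = 8;
iwht4x4(&mut block);
assert_eq!(block, [1i32; 16]);
}
}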

2932 vendor/image/src/codecs/webp/vp8.rs vendored Normal file
File diff suppressed because it is too large

985 vendor/image/src/color.rs vendored Normal file

@@ -0,0 +1,985 @@
use std::ops::{Index, IndexMut};
use num_traits::{NumCast, ToPrimitive, Zero};
use crate::traits::{Enlargeable, Pixel, Primitive};
/// An enumeration over supported color types and bit depths
#[derive(Copy, PartialEq, Eq, Debug, Clone, Hash)]
#[non_exhaustive]
pub enum ColorType {
/// Pixel is 8-bit luminance
L8,
/// Pixel is 8-bit luminance with an alpha channel
La8,
/// Pixel contains 8-bit R, G and B channels
Rgb8,
/// Pixel is 8-bit RGB with an alpha channel
Rgba8,
/// Pixel is 16-bit luminance
L16,
/// Pixel is 16-bit luminance with an alpha channel
La16,
/// Pixel is 16-bit RGB
Rgb16,
/// Pixel is 16-bit RGBA
Rgba16,
/// Pixel is 32-bit float RGB
Rgb32F,
/// Pixel is 32-bit float RGBA
Rgba32F,
}
impl ColorType {
/// Returns the number of bytes contained in a pixel of this `ColorType`.
pub fn bytes_per_pixel(self) -> u8 {
match self {
ColorType::L8 => 1,
ColorType::L16 | ColorType::La8 => 2,
ColorType::Rgb8 => 3,
ColorType::Rgba8 | ColorType::La16 => 4,
ColorType::Rgb16 => 6,
ColorType::Rgba16 => 8,
ColorType::Rgb32F => 3 * 4,
ColorType::Rgba32F => 4 * 4,
}
}
/// Returns if there is an alpha channel.
pub fn has_alpha(self) -> bool {
use ColorType::*;
match self {
L8 | L16 | Rgb8 | Rgb16 | Rgb32F => false,
La8 | Rgba8 | La16 | Rgba16 | Rgba32F => true,
}
}
/// Returns false if the color scheme is grayscale, true otherwise.
pub fn has_color(self) -> bool {
use ColorType::*;
match self {
L8 | L16 | La8 | La16 => false,
Rgb8 | Rgb16 | Rgba8 | Rgba16 | Rgb32F | Rgba32F => true,
}
}
/// Returns the number of bits contained in a pixel of this `ColorType` (which will always
/// be a multiple of 8).
pub fn bits_per_pixel(self) -> u16 {
<u16 as From<u8>>::from(self.bytes_per_pixel()) * 8
}
/// Returns the number of color channels that make up this pixel
pub fn channel_count(self) -> u8 {
let e: ExtendedColorType = self.into();
e.channel_count()
}
}
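The size bookkeeping above follows one invariant: bytes per pixel is the channel count times the per-sample byte width (e.g. `Rgb16` = 3 × 2 = 6). A hypothetical standalone sketch (the enum `Ct` and its helper methods are illustrative, not the crate's API):

```rust
// Illustrative sketch of the ColorType size bookkeeping: bytes_per_pixel
// decomposes into channel count times per-sample width.
#[derive(Copy, Clone)]
enum Ct { L8, La8, Rgb8, Rgba8, L16, La16, Rgb16, Rgba16, Rgb32F, Rgba32F }

impl Ct {
    fn channels(self) -> u8 {
        match self {
            Ct::L8 | Ct::L16 => 1,
            Ct::La8 | Ct::La16 => 2,
            Ct::Rgb8 | Ct::Rgb16 | Ct::Rgb32F => 3,
            Ct::Rgba8 | Ct::Rgba16 | Ct::Rgba32F => 4,
        }
    }
    fn sample_bytes(self) -> u8 {
        match self {
            Ct::L8 | Ct::La8 | Ct::Rgb8 | Ct::Rgba8 => 1,
            Ct::L16 | Ct::La16 | Ct::Rgb16 | Ct::Rgba16 => 2,
            Ct::Rgb32F | Ct::Rgba32F => 4,
        }
    }
    fn bytes_per_pixel(self) -> u8 {
        self.channels() * self.sample_bytes()
    }
}

fn main() {
    assert_eq!(Ct::Rgb16.bytes_per_pixel(), 6);
    assert_eq!(Ct::Rgba32F.bytes_per_pixel(), 16);
    assert_eq!(Ct::La8.bytes_per_pixel(), 2);
}
```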
/// An enumeration of color types encountered in image formats.
///
/// This is not exhaustive over all existing image formats but should be granular enough to allow
/// round tripping of decoding and encoding as much as possible. The variants will be extended as
/// necessary to enable this.
///
/// Another purpose is to advise users of a rough estimate of the accuracy and effort of the
/// decoding from and encoding to such an image format.
#[derive(Copy, PartialEq, Eq, Debug, Clone, Hash)]
#[non_exhaustive]
pub enum ExtendedColorType {
/// Pixel is 8-bit alpha
A8,
/// Pixel is 1-bit luminance
L1,
/// Pixel is 1-bit luminance with an alpha channel
La1,
/// Pixel contains 1-bit R, G and B channels
Rgb1,
/// Pixel is 1-bit RGB with an alpha channel
Rgba1,
/// Pixel is 2-bit luminance
L2,
/// Pixel is 2-bit luminance with an alpha channel
La2,
/// Pixel contains 2-bit R, G and B channels
Rgb2,
/// Pixel is 2-bit RGB with an alpha channel
Rgba2,
/// Pixel is 4-bit luminance
L4,
/// Pixel is 4-bit luminance with an alpha channel
La4,
/// Pixel contains 4-bit R, G and B channels
Rgb4,
/// Pixel is 4-bit RGB with an alpha channel
Rgba4,
/// Pixel is 8-bit luminance
L8,
/// Pixel is 8-bit luminance with an alpha channel
La8,
/// Pixel contains 8-bit R, G and B channels
Rgb8,
/// Pixel is 8-bit RGB with an alpha channel
Rgba8,
/// Pixel is 16-bit luminance
L16,
/// Pixel is 16-bit luminance with an alpha channel
La16,
/// Pixel contains 16-bit R, G and B channels
Rgb16,
/// Pixel is 16-bit RGB with an alpha channel
Rgba16,
/// Pixel contains 8-bit B, G and R channels
Bgr8,
/// Pixel is 8-bit BGR with an alpha channel
Bgra8,
// TODO f16 types?
/// Pixel is 32-bit float RGB
Rgb32F,
/// Pixel is 32-bit float RGBA
Rgba32F,
/// Pixel is of unknown color type with the specified bits per pixel. This can apply to pixels
/// which are associated with an external palette. In that case, the pixel value is an index
/// into the palette.
Unknown(u8),
}
impl ExtendedColorType {
/// Get the number of channels for colors of this type.
///
/// Note that the `Unknown` variant returns a value of `1` since each such pixel can only be
/// treated as an opaque datum by the library.
pub fn channel_count(self) -> u8 {
match self {
ExtendedColorType::A8
| ExtendedColorType::L1
| ExtendedColorType::L2
| ExtendedColorType::L4
| ExtendedColorType::L8
| ExtendedColorType::L16
| ExtendedColorType::Unknown(_) => 1,
ExtendedColorType::La1
| ExtendedColorType::La2
| ExtendedColorType::La4
| ExtendedColorType::La8
| ExtendedColorType::La16 => 2,
ExtendedColorType::Rgb1
| ExtendedColorType::Rgb2
| ExtendedColorType::Rgb4
| ExtendedColorType::Rgb8
| ExtendedColorType::Rgb16
| ExtendedColorType::Rgb32F
| ExtendedColorType::Bgr8 => 3,
ExtendedColorType::Rgba1
| ExtendedColorType::Rgba2
| ExtendedColorType::Rgba4
| ExtendedColorType::Rgba8
| ExtendedColorType::Rgba16
| ExtendedColorType::Rgba32F
| ExtendedColorType::Bgra8 => 4,
}
}
}
impl From<ColorType> for ExtendedColorType {
fn from(c: ColorType) -> Self {
match c {
ColorType::L8 => ExtendedColorType::L8,
ColorType::La8 => ExtendedColorType::La8,
ColorType::Rgb8 => ExtendedColorType::Rgb8,
ColorType::Rgba8 => ExtendedColorType::Rgba8,
ColorType::L16 => ExtendedColorType::L16,
ColorType::La16 => ExtendedColorType::La16,
ColorType::Rgb16 => ExtendedColorType::Rgb16,
ColorType::Rgba16 => ExtendedColorType::Rgba16,
ColorType::Rgb32F => ExtendedColorType::Rgb32F,
ColorType::Rgba32F => ExtendedColorType::Rgba32F,
}
}
}
macro_rules! define_colors {
{$(
$(#[$doc:meta])*
pub struct $ident:ident<T: $($bound:ident)*>([T; $channels:expr, $alphas:expr])
= $interpretation:literal;
)*} => {
$( // START Structure definitions
$(#[$doc])*
#[derive(PartialEq, Eq, Clone, Debug, Copy, Hash)]
#[repr(C)]
#[allow(missing_docs)]
pub struct $ident<T> (pub [T; $channels]);
impl<T: $($bound+)*> Pixel for $ident<T> {
type Subpixel = T;
const CHANNEL_COUNT: u8 = $channels;
#[inline(always)]
fn channels(&self) -> &[T] {
&self.0
}
#[inline(always)]
fn channels_mut(&mut self) -> &mut [T] {
&mut self.0
}
const COLOR_MODEL: &'static str = $interpretation;
fn channels4(&self) -> (T, T, T, T) {
const CHANNELS: usize = $channels;
let mut channels = [T::DEFAULT_MAX_VALUE; 4];
channels[0..CHANNELS].copy_from_slice(&self.0);
(channels[0], channels[1], channels[2], channels[3])
}
fn from_channels(a: T, b: T, c: T, d: T,) -> $ident<T> {
const CHANNELS: usize = $channels;
*<$ident<T> as Pixel>::from_slice(&[a, b, c, d][..CHANNELS])
}
fn from_slice(slice: &[T]) -> &$ident<T> {
assert_eq!(slice.len(), $channels);
unsafe { &*(slice.as_ptr() as *const $ident<T>) }
}
fn from_slice_mut(slice: &mut [T]) -> &mut $ident<T> {
assert_eq!(slice.len(), $channels);
unsafe { &mut *(slice.as_mut_ptr() as *mut $ident<T>) }
}
fn to_rgb(&self) -> Rgb<T> {
let mut pix = Rgb([Zero::zero(), Zero::zero(), Zero::zero()]);
pix.from_color(self);
pix
}
fn to_rgba(&self) -> Rgba<T> {
let mut pix = Rgba([Zero::zero(), Zero::zero(), Zero::zero(), Zero::zero()]);
pix.from_color(self);
pix
}
fn to_luma(&self) -> Luma<T> {
let mut pix = Luma([Zero::zero()]);
pix.from_color(self);
pix
}
fn to_luma_alpha(&self) -> LumaA<T> {
let mut pix = LumaA([Zero::zero(), Zero::zero()]);
pix.from_color(self);
pix
}
fn map<F>(& self, f: F) -> $ident<T> where F: FnMut(T) -> T {
let mut this = (*self).clone();
this.apply(f);
this
}
fn apply<F>(&mut self, mut f: F) where F: FnMut(T) -> T {
for v in &mut self.0 {
*v = f(*v)
}
}
fn map_with_alpha<F, G>(&self, f: F, g: G) -> $ident<T> where F: FnMut(T) -> T, G: FnMut(T) -> T {
let mut this = (*self).clone();
this.apply_with_alpha(f, g);
this
}
fn apply_with_alpha<F, G>(&mut self, mut f: F, mut g: G) where F: FnMut(T) -> T, G: FnMut(T) -> T {
const ALPHA: usize = $channels - $alphas;
for v in self.0[..ALPHA].iter_mut() {
*v = f(*v)
}
// The `ALPHA` index is a constant for each pixel type. Using `get_mut` ensures that no
// subexpression fails the `const_err` lint (the indexing expression `self.0[ALPHA]` would
// for types without an alpha channel).
if let Some(v) = self.0.get_mut(ALPHA) {
*v = g(*v)
}
}
fn map2<F>(&self, other: &Self, f: F) -> $ident<T> where F: FnMut(T, T) -> T {
let mut this = (*self).clone();
this.apply2(other, f);
this
}
fn apply2<F>(&mut self, other: &$ident<T>, mut f: F) where F: FnMut(T, T) -> T {
for (a, &b) in self.0.iter_mut().zip(other.0.iter()) {
*a = f(*a, b)
}
}
fn invert(&mut self) {
Invert::invert(self)
}
fn blend(&mut self, other: &$ident<T>) {
Blend::blend(self, other)
}
}
impl<T> Index<usize> for $ident<T> {
type Output = T;
#[inline(always)]
fn index(&self, _index: usize) -> &T {
&self.0[_index]
}
}
impl<T> IndexMut<usize> for $ident<T> {
#[inline(always)]
fn index_mut(&mut self, _index: usize) -> &mut T {
&mut self.0[_index]
}
}
impl<T> From<[T; $channels]> for $ident<T> {
fn from(c: [T; $channels]) -> Self {
Self(c)
}
}
)* // END Structure definitions
}
}
define_colors! {
/// RGB colors.
///
/// For the purpose of color conversion, as well as blending, the implementation of `Pixel`
/// assumes an `sRGB` color space of its data.
pub struct Rgb<T: Primitive Enlargeable>([T; 3, 0]) = "RGB";
/// Grayscale colors.
pub struct Luma<T: Primitive>([T; 1, 0]) = "Y";
/// RGB colors + alpha channel
pub struct Rgba<T: Primitive Enlargeable>([T; 4, 1]) = "RGBA";
/// Grayscale colors + alpha channel
pub struct LumaA<T: Primitive>([T; 2, 1]) = "YA";
}
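For illustration, this is roughly what `define_colors!` expands to for `Rgb`: a `#[repr(C)]` newtype over a fixed-size array with `Index` and `From` implementations. A minimal sketch with trimmed-down impls, not the macro's full output:

```rust
use std::ops::Index;

// Minimal stand-in for the expanded Rgb type: a #[repr(C)] newtype over
// [T; 3], indexable by channel and constructible from an array.
#[derive(PartialEq, Debug, Clone, Copy)]
#[repr(C)]
struct Rgb<T>(pub [T; 3]);

impl<T> Index<usize> for Rgb<T> {
    type Output = T;
    fn index(&self, i: usize) -> &T {
        &self.0[i]
    }
}

impl<T> From<[T; 3]> for Rgb<T> {
    fn from(c: [T; 3]) -> Self {
        Self(c)
    }
}

fn main() {
    let px: Rgb<u8> = [10, 20, 30].into();
    assert_eq!(px[1], 20);
    assert_eq!(px, Rgb([10, 20, 30]));
}
```

The `#[repr(C)]` layout is what makes the pointer casts in `from_slice`/`from_slice_mut` above sound: the struct is guaranteed to have the same layout as its inner array.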
/// Convert from one pixel component type to another. For example, convert from `u8` to `f32` pixel values.
pub trait FromPrimitive<Component> {
/// Converts from any pixel component type to this type.
fn from_primitive(component: Component) -> Self;
}
impl<T: Primitive> FromPrimitive<T> for T {
fn from_primitive(sample: T) -> Self {
sample
}
}
// from f32:
// Note that in to-integer-conversion we are performing rounding but NumCast::from is implemented
// as truncate towards zero. We emulate rounding by adding a bias.
impl FromPrimitive<f32> for u8 {
fn from_primitive(float: f32) -> Self {
let inner = (float.clamp(0.0, 1.0) * u8::MAX as f32).round();
NumCast::from(inner).unwrap()
}
}
impl FromPrimitive<f32> for u16 {
fn from_primitive(float: f32) -> Self {
let inner = (float.clamp(0.0, 1.0) * u16::MAX as f32).round();
NumCast::from(inner).unwrap()
}
}
// from u16:
impl FromPrimitive<u16> for u8 {
fn from_primitive(c16: u16) -> Self {
fn from(c: impl Into<u32>) -> u32 {
c.into()
}
// The input c is the numerator of `c / u16::MAX`.
// Derive numerator of `num / u8::MAX`, with rounding.
//
// This method is based on the inverse (see FromPrimitive<u8> for u16) and was tested
// exhaustively in Python. It's the same as the reference function:
// round(c * (2**8 - 1) / (2**16 - 1))
NumCast::from((from(c16) + 128) / 257).unwrap()
}
}
impl FromPrimitive<u16> for f32 {
fn from_primitive(int: u16) -> Self {
(int as f32 / u16::MAX as f32).clamp(0.0, 1.0)
}
}
// from u8:
impl FromPrimitive<u8> for f32 {
fn from_primitive(int: u8) -> Self {
(int as f32 / u8::MAX as f32).clamp(0.0, 1.0)
}
}
impl FromPrimitive<u8> for u16 {
fn from_primitive(c8: u8) -> Self {
let x = c8.to_u64().unwrap();
NumCast::from((x << 8) | x).unwrap()
}
}
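The rounding comment above can be checked exhaustively: since 65535 = 257 × 255, the integer shortcut `(c + 128) / 257` agrees with the reference `round(c * 255 / 65535)` for every `u16` input, and the inverse `(x << 8) | x` expansion replicates the byte. A small verification sketch:

```rust
fn main() {
    // Exhaustive check that the integer shortcut matches the reference
    // rounding formula round(c * 255 / 65535) for all 65536 inputs.
    for c in 0..=u16::MAX {
        let fast = (u32::from(c) + 128) / 257;
        let reference = ((f64::from(c) * 255.0) / 65535.0).round() as u32;
        assert_eq!(fast, reference);
    }
    // The inverse u8 -> u16 expansion replicates the byte: 0xAB -> 0xABAB.
    let x: u16 = (0xABu16 << 8) | 0xAB;
    assert_eq!(x, 0xABAB);
}
```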
/// Provides color conversions for the different pixel types.
pub trait FromColor<Other> {
/// Changes `self` to represent `Other` in the color space of `Self`
fn from_color(&mut self, _: &Other);
}
/// Copy-based conversions to target pixel types using `FromColor`.
// FIXME: this trait should be removed and replaced with real color space models
// rather than assuming sRGB.
pub(crate) trait IntoColor<Other> {
/// Constructs a pixel of the target type and converts this pixel into it.
fn into_color(&self) -> Other;
}
impl<O, S> IntoColor<O> for S
where
O: Pixel + FromColor<S>,
{
fn into_color(&self) -> O {
// Note we cannot use Pixel::CHANNELS_COUNT here to directly construct
// the pixel due to a current bug/limitation of consts.
#[allow(deprecated)]
let mut pix = O::from_channels(Zero::zero(), Zero::zero(), Zero::zero(), Zero::zero());
pix.from_color(self);
pix
}
}
/// Coefficients to transform from sRGB to a CIE Y (luminance) value.
const SRGB_LUMA: [u32; 3] = [2126, 7152, 722];
const SRGB_LUMA_DIV: u32 = 10000;
#[inline]
fn rgb_to_luma<T: Primitive + Enlargeable>(rgb: &[T]) -> T {
let l = <T::Larger as NumCast>::from(SRGB_LUMA[0]).unwrap() * rgb[0].to_larger()
+ <T::Larger as NumCast>::from(SRGB_LUMA[1]).unwrap() * rgb[1].to_larger()
+ <T::Larger as NumCast>::from(SRGB_LUMA[2]).unwrap() * rgb[2].to_larger();
T::clamp_from(l / <T::Larger as NumCast>::from(SRGB_LUMA_DIV).unwrap())
}
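For 8-bit samples the integer luma computation above reduces to `(2126*R + 7152*G + 722*B) / 10000`, using the BT.709-style weights in `SRGB_LUMA`. A standalone sketch (the helper `luma_u8` is illustrative, using the same truncating integer division):

```rust
// Illustrative 8-bit specialization of rgb_to_luma above: weighted sum
// of the channels with the SRGB_LUMA coefficients, divided by 10000.
fn luma_u8(r: u8, g: u8, b: u8) -> u8 {
    let l = 2126u32 * r as u32 + 7152 * g as u32 + 722 * b as u32;
    (l / 10000) as u8
}

fn main() {
    // The weights sum to 10000, so white maps exactly to white.
    assert_eq!(luma_u8(255, 255, 255), 255);
    assert_eq!(luma_u8(0, 0, 0), 0);
    // Pure red contributes about 21% of full luminance.
    assert_eq!(luma_u8(255, 0, 0), 54);
}
```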
// `FromColor` for Luma
impl<S: Primitive, T: Primitive> FromColor<Luma<S>> for Luma<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Luma<S>) {
let own = self.channels_mut();
let other = other.channels();
own[0] = T::from_primitive(other[0]);
}
}
impl<S: Primitive, T: Primitive> FromColor<LumaA<S>> for Luma<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &LumaA<S>) {
self.channels_mut()[0] = T::from_primitive(other.channels()[0])
}
}
impl<S: Primitive + Enlargeable, T: Primitive> FromColor<Rgb<S>> for Luma<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Rgb<S>) {
let gray = self.channels_mut();
let rgb = other.channels();
gray[0] = T::from_primitive(rgb_to_luma(rgb));
}
}
impl<S: Primitive + Enlargeable, T: Primitive> FromColor<Rgba<S>> for Luma<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Rgba<S>) {
let gray = self.channels_mut();
let rgb = other.channels();
let l = rgb_to_luma(rgb);
gray[0] = T::from_primitive(l);
}
}
// `FromColor` for LumaA
impl<S: Primitive, T: Primitive> FromColor<LumaA<S>> for LumaA<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &LumaA<S>) {
let own = self.channels_mut();
let other = other.channels();
own[0] = T::from_primitive(other[0]);
own[1] = T::from_primitive(other[1]);
}
}
impl<S: Primitive + Enlargeable, T: Primitive> FromColor<Rgb<S>> for LumaA<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Rgb<S>) {
let gray_a = self.channels_mut();
let rgb = other.channels();
gray_a[0] = T::from_primitive(rgb_to_luma(rgb));
gray_a[1] = T::DEFAULT_MAX_VALUE;
}
}
impl<S: Primitive + Enlargeable, T: Primitive> FromColor<Rgba<S>> for LumaA<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Rgba<S>) {
let gray_a = self.channels_mut();
let rgba = other.channels();
gray_a[0] = T::from_primitive(rgb_to_luma(rgba));
gray_a[1] = T::from_primitive(rgba[3]);
}
}
impl<S: Primitive, T: Primitive> FromColor<Luma<S>> for LumaA<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Luma<S>) {
let gray_a = self.channels_mut();
gray_a[0] = T::from_primitive(other.channels()[0]);
gray_a[1] = T::DEFAULT_MAX_VALUE;
}
}
// `FromColor` for RGBA
impl<S: Primitive, T: Primitive> FromColor<Rgba<S>> for Rgba<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Rgba<S>) {
let own = &mut self.0;
let other = &other.0;
own[0] = T::from_primitive(other[0]);
own[1] = T::from_primitive(other[1]);
own[2] = T::from_primitive(other[2]);
own[3] = T::from_primitive(other[3]);
}
}
impl<S: Primitive, T: Primitive> FromColor<Rgb<S>> for Rgba<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Rgb<S>) {
let rgba = &mut self.0;
let rgb = &other.0;
rgba[0] = T::from_primitive(rgb[0]);
rgba[1] = T::from_primitive(rgb[1]);
rgba[2] = T::from_primitive(rgb[2]);
rgba[3] = T::DEFAULT_MAX_VALUE;
}
}
impl<S: Primitive, T: Primitive> FromColor<LumaA<S>> for Rgba<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, gray: &LumaA<S>) {
let rgba = &mut self.0;
let gray = &gray.0;
rgba[0] = T::from_primitive(gray[0]);
rgba[1] = T::from_primitive(gray[0]);
rgba[2] = T::from_primitive(gray[0]);
rgba[3] = T::from_primitive(gray[1]);
}
}
impl<S: Primitive, T: Primitive> FromColor<Luma<S>> for Rgba<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, gray: &Luma<S>) {
let rgba = &mut self.0;
let gray = gray.0[0];
rgba[0] = T::from_primitive(gray);
rgba[1] = T::from_primitive(gray);
rgba[2] = T::from_primitive(gray);
rgba[3] = T::DEFAULT_MAX_VALUE;
}
}
// `FromColor` for RGB
impl<S: Primitive, T: Primitive> FromColor<Rgb<S>> for Rgb<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Rgb<S>) {
let own = &mut self.0;
let other = &other.0;
own[0] = T::from_primitive(other[0]);
own[1] = T::from_primitive(other[1]);
own[2] = T::from_primitive(other[2]);
}
}
impl<S: Primitive, T: Primitive> FromColor<Rgba<S>> for Rgb<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Rgba<S>) {
let rgb = &mut self.0;
let rgba = &other.0;
rgb[0] = T::from_primitive(rgba[0]);
rgb[1] = T::from_primitive(rgba[1]);
rgb[2] = T::from_primitive(rgba[2]);
}
}
impl<S: Primitive, T: Primitive> FromColor<LumaA<S>> for Rgb<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &LumaA<S>) {
let rgb = &mut self.0;
let gray = other.0[0];
rgb[0] = T::from_primitive(gray);
rgb[1] = T::from_primitive(gray);
rgb[2] = T::from_primitive(gray);
}
}
impl<S: Primitive, T: Primitive> FromColor<Luma<S>> for Rgb<T>
where
T: FromPrimitive<S>,
{
fn from_color(&mut self, other: &Luma<S>) {
let rgb = &mut self.0;
let gray = other.0[0];
rgb[0] = T::from_primitive(gray);
rgb[1] = T::from_primitive(gray);
rgb[2] = T::from_primitive(gray);
}
}
/// Blends one color into another.
pub(crate) trait Blend {
/// Blends a color in-place.
fn blend(&mut self, other: &Self);
}
impl<T: Primitive> Blend for LumaA<T> {
fn blend(&mut self, other: &LumaA<T>) {
let max_t = T::DEFAULT_MAX_VALUE;
let max_t = max_t.to_f32().unwrap();
let (bg_luma, bg_a) = (self.0[0], self.0[1]);
let (fg_luma, fg_a) = (other.0[0], other.0[1]);
let (bg_luma, bg_a) = (
bg_luma.to_f32().unwrap() / max_t,
bg_a.to_f32().unwrap() / max_t,
);
let (fg_luma, fg_a) = (
fg_luma.to_f32().unwrap() / max_t,
fg_a.to_f32().unwrap() / max_t,
);
let alpha_final = bg_a + fg_a - bg_a * fg_a;
if alpha_final == 0.0 {
return;
};
let bg_luma_a = bg_luma * bg_a;
let fg_luma_a = fg_luma * fg_a;
let out_luma_a = fg_luma_a + bg_luma_a * (1.0 - fg_a);
let out_luma = out_luma_a / alpha_final;
*self = LumaA([
NumCast::from(max_t * out_luma).unwrap(),
NumCast::from(max_t * alpha_final).unwrap(),
])
}
}
impl<T: Primitive> Blend for Luma<T> {
fn blend(&mut self, other: &Luma<T>) {
*self = *other
}
}
impl<T: Primitive> Blend for Rgba<T> {
fn blend(&mut self, other: &Rgba<T>) {
// http://stackoverflow.com/questions/7438263/alpha-compositing-algorithm-blend-modes#answer-11163848
if other.0[3].is_zero() {
return;
}
if other.0[3] == T::DEFAULT_MAX_VALUE {
*self = *other;
return;
}
// First, as we don't know what type our pixel is, we have to convert to floats between 0.0 and 1.0
let max_t = T::DEFAULT_MAX_VALUE;
let max_t = max_t.to_f32().unwrap();
let (bg_r, bg_g, bg_b, bg_a) = (self.0[0], self.0[1], self.0[2], self.0[3]);
let (fg_r, fg_g, fg_b, fg_a) = (other.0[0], other.0[1], other.0[2], other.0[3]);
let (bg_r, bg_g, bg_b, bg_a) = (
bg_r.to_f32().unwrap() / max_t,
bg_g.to_f32().unwrap() / max_t,
bg_b.to_f32().unwrap() / max_t,
bg_a.to_f32().unwrap() / max_t,
);
let (fg_r, fg_g, fg_b, fg_a) = (
fg_r.to_f32().unwrap() / max_t,
fg_g.to_f32().unwrap() / max_t,
fg_b.to_f32().unwrap() / max_t,
fg_a.to_f32().unwrap() / max_t,
);
// Work out what the final alpha level will be
let alpha_final = bg_a + fg_a - bg_a * fg_a;
if alpha_final == 0.0 {
return;
};
// We premultiply our channels by their alpha, as this makes it easier to calculate
let (bg_r_a, bg_g_a, bg_b_a) = (bg_r * bg_a, bg_g * bg_a, bg_b * bg_a);
let (fg_r_a, fg_g_a, fg_b_a) = (fg_r * fg_a, fg_g * fg_a, fg_b * fg_a);
// Standard formula for src-over alpha compositing
let (out_r_a, out_g_a, out_b_a) = (
fg_r_a + bg_r_a * (1.0 - fg_a),
fg_g_a + bg_g_a * (1.0 - fg_a),
fg_b_a + bg_b_a * (1.0 - fg_a),
);
// Unmultiply the channels by our resultant alpha channel
let (out_r, out_g, out_b) = (
out_r_a / alpha_final,
out_g_a / alpha_final,
out_b_a / alpha_final,
);
// Cast back to our initial type on return
*self = Rgba([
NumCast::from(max_t * out_r).unwrap(),
NumCast::from(max_t * out_g).unwrap(),
NumCast::from(max_t * out_b).unwrap(),
NumCast::from(max_t * alpha_final).unwrap(),
])
}
}
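The blending arithmetic above is standard src-over compositing: premultiply each channel by its alpha, composite, then divide out the final alpha. A minimal float sketch for a single channel (the `src_over` helper is illustrative, not the crate's API):

```rust
// Src-over compositing for one (channel, alpha) pair, both already
// scaled to 0.0..=1.0, mirroring the steps in Rgba::blend above.
fn src_over(bg: (f32, f32), fg: (f32, f32)) -> (f32, f32) {
    let (bc, ba) = bg;
    let (fc, fa) = fg;
    // Final alpha of the composite.
    let a_out = ba + fa - ba * fa;
    if a_out == 0.0 {
        return bg;
    }
    // Premultiply, composite, then unmultiply by the final alpha.
    let c_out = (fc * fa + bc * ba * (1.0 - fa)) / a_out;
    (c_out, a_out)
}

fn main() {
    // 50% grey at 50% alpha over opaque white: stays opaque, lightened.
    let (c, a) = src_over((1.0, 1.0), (0.5, 0.5));
    assert!((a - 1.0).abs() < 1e-6);
    assert!((c - 0.75).abs() < 1e-6);
}
```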
impl<T: Primitive> Blend for Rgb<T> {
fn blend(&mut self, other: &Rgb<T>) {
*self = *other
}
}
/// Invert a color
pub(crate) trait Invert {
/// Inverts a color in-place.
fn invert(&mut self);
}
impl<T: Primitive> Invert for LumaA<T> {
fn invert(&mut self) {
let l = self.0;
let max = T::DEFAULT_MAX_VALUE;
*self = LumaA([max - l[0], l[1]])
}
}
impl<T: Primitive> Invert for Luma<T> {
fn invert(&mut self) {
let l = self.0;
let max = T::DEFAULT_MAX_VALUE;
let l1 = max - l[0];
*self = Luma([l1])
}
}
impl<T: Primitive> Invert for Rgba<T> {
fn invert(&mut self) {
let rgba = self.0;
let max = T::DEFAULT_MAX_VALUE;
*self = Rgba([max - rgba[0], max - rgba[1], max - rgba[2], rgba[3]])
}
}
impl<T: Primitive> Invert for Rgb<T> {
fn invert(&mut self) {
let rgb = self.0;
let max = T::DEFAULT_MAX_VALUE;
let r1 = max - rgb[0];
let g1 = max - rgb[1];
let b1 = max - rgb[2];
*self = Rgb([r1, g1, b1])
}
}
#[cfg(test)]
mod tests {
use super::{Luma, LumaA, Pixel, Rgb, Rgba};
#[test]
fn test_apply_with_alpha_rgba() {
let mut rgba = Rgba([0, 0, 0, 0]);
rgba.apply_with_alpha(|s| s, |_| 0xFF);
assert_eq!(rgba, Rgba([0, 0, 0, 0xFF]));
}
#[test]
fn test_apply_with_alpha_rgb() {
let mut rgb = Rgb([0, 0, 0]);
rgb.apply_with_alpha(|s| s, |_| panic!("bug"));
assert_eq!(rgb, Rgb([0, 0, 0]));
}
#[test]
fn test_map_with_alpha_rgba() {
let rgba = Rgba([0, 0, 0, 0]).map_with_alpha(|s| s, |_| 0xFF);
assert_eq!(rgba, Rgba([0, 0, 0, 0xFF]));
}
#[test]
fn test_map_with_alpha_rgb() {
let rgb = Rgb([0, 0, 0]).map_with_alpha(|s| s, |_| panic!("bug"));
assert_eq!(rgb, Rgb([0, 0, 0]));
}
#[test]
fn test_blend_luma_alpha() {
let ref mut a = LumaA([255 as u8, 255]);
let b = LumaA([255 as u8, 255]);
a.blend(&b);
assert_eq!(a.0[0], 255);
assert_eq!(a.0[1], 255);
let ref mut a = LumaA([255 as u8, 0]);
let b = LumaA([255 as u8, 255]);
a.blend(&b);
assert_eq!(a.0[0], 255);
assert_eq!(a.0[1], 255);
let ref mut a = LumaA([255 as u8, 255]);
let b = LumaA([255 as u8, 0]);
a.blend(&b);
assert_eq!(a.0[0], 255);
assert_eq!(a.0[1], 255);
let ref mut a = LumaA([255 as u8, 0]);
let b = LumaA([255 as u8, 0]);
a.blend(&b);
assert_eq!(a.0[0], 255);
assert_eq!(a.0[1], 0);
}
#[test]
fn test_blend_rgba() {
let ref mut a = Rgba([255 as u8, 255, 255, 255]);
let b = Rgba([255 as u8, 255, 255, 255]);
a.blend(&b);
assert_eq!(a.0, [255, 255, 255, 255]);
let ref mut a = Rgba([255 as u8, 255, 255, 0]);
let b = Rgba([255 as u8, 255, 255, 255]);
a.blend(&b);
assert_eq!(a.0, [255, 255, 255, 255]);
let ref mut a = Rgba([255 as u8, 255, 255, 255]);
let b = Rgba([255 as u8, 255, 255, 0]);
a.blend(&b);
assert_eq!(a.0, [255, 255, 255, 255]);
let ref mut a = Rgba([255 as u8, 255, 255, 0]);
let b = Rgba([255 as u8, 255, 255, 0]);
a.blend(&b);
assert_eq!(a.0, [255, 255, 255, 0]);
}
#[test]
fn test_apply_without_alpha_rgba() {
let mut rgba = Rgba([0, 0, 0, 0]);
rgba.apply_without_alpha(|s| s + 1);
assert_eq!(rgba, Rgba([1, 1, 1, 0]));
}
#[test]
fn test_apply_without_alpha_rgb() {
let mut rgb = Rgb([0, 0, 0]);
rgb.apply_without_alpha(|s| s + 1);
assert_eq!(rgb, Rgb([1, 1, 1]));
}
#[test]
fn test_map_without_alpha_rgba() {
let rgba = Rgba([0, 0, 0, 0]).map_without_alpha(|s| s + 1);
assert_eq!(rgba, Rgba([1, 1, 1, 0]));
}
#[test]
fn test_map_without_alpha_rgb() {
let rgb = Rgb([0, 0, 0]).map_without_alpha(|s| s + 1);
assert_eq!(rgb, Rgb([1, 1, 1]));
}
macro_rules! test_lossless_conversion {
($a:ty, $b:ty, $c:ty) => {
let a: $a = [<$a as Pixel>::Subpixel::DEFAULT_MAX_VALUE >> 2;
<$a as Pixel>::CHANNEL_COUNT as usize]
.into();
let b: $b = a.into_color();
let c: $c = b.into_color();
assert_eq!(a.channels(), c.channels());
};
}
#[test]
fn test_lossless_conversions() {
use super::IntoColor;
use crate::traits::Primitive;
test_lossless_conversion!(Luma<u8>, Luma<u16>, Luma<u8>);
test_lossless_conversion!(LumaA<u8>, LumaA<u16>, LumaA<u8>);
test_lossless_conversion!(Rgb<u8>, Rgb<u16>, Rgb<u8>);
test_lossless_conversion!(Rgba<u8>, Rgba<u16>, Rgba<u8>);
}
#[test]
fn accuracy_conversion() {
use super::{Luma, Pixel, Rgb};
let pixel = Rgb::from([13, 13, 13]);
let Luma([luma]) = pixel.to_luma();
assert_eq!(luma, 13);
}
}

1353 vendor/image/src/dynimage.rs vendored Normal file
File diff suppressed because it is too large

506 vendor/image/src/error.rs vendored Normal file

@@ -0,0 +1,506 @@
//! Contains detailed error representation.
//!
//! See the main [`ImageError`] which contains a variant for each specialized error type. The
//! subtypes used in each variant are opaque by design. They can be roughly inspected through their
//! respective `kind` methods which work similarly to `std::io::Error::kind`.
//!
//! The error interface makes it possible to inspect the error of an underlying decoder or encoder,
//! through the `Error::source` method. Note that this is not part of the stable interface and you
//! may not rely on a particular error value for a particular operation. This mainly means that
//! `image` does not promise to stay on a particular version of its underlying decoders, but if
//! you pin the same version of the dependency (or at least of the error type) through external
//! means, you can inspect the error type in slightly more detail.
//!
//! [`ImageError`]: enum.ImageError.html
use std::error::Error;
use std::{fmt, io};
use crate::color::ExtendedColorType;
use crate::image::ImageFormat;
/// The generic error type for image operations.
///
/// This high level enum allows, by variant matching, a rough separation of concerns between
/// underlying IO, the caller, format specifications, and the `image` implementation.
#[derive(Debug)]
pub enum ImageError {
/// An error was encountered while decoding.
///
/// This means that the input data did not conform to the specification of some image format,
/// or that no format could be determined, or that it did not match format specific
/// requirements set by the caller.
Decoding(DecodingError),
/// An error was encountered while encoding.
///
/// The input image can not be encoded with the chosen format, for example because the
/// specification has no representation for its color space or because a necessary conversion
/// is ambiguous. In some cases it might also happen that the dimensions can not be used with
/// the format.
Encoding(EncodingError),
/// An error was encountered in input arguments.
///
/// This is a catch-all case for strictly internal operations such as scaling, conversions,
/// etc. that involve no external format specifications.
Parameter(ParameterError),
/// Completing the operation would have required more resources than allowed.
///
/// Errors of this type are limits set by the user or environment, *not* inherent in a specific
/// format or operation that was executed.
Limits(LimitError),
/// An operation can not be completed by the chosen abstraction.
///
/// This means that it might be possible for the operation to succeed in general but
/// * it requires a disabled feature,
/// * the implementation does not yet exist, or
/// * no abstraction for a lower level could be found.
Unsupported(UnsupportedError),
/// An error occurred while interacting with the environment.
IoError(io::Error),
}
/// The implementation for an operation was not provided.
///
/// See the variant [`Unsupported`] for more documentation.
///
/// [`Unsupported`]: enum.ImageError.html#variant.Unsupported
#[derive(Debug)]
pub struct UnsupportedError {
format: ImageFormatHint,
kind: UnsupportedErrorKind,
}
/// Details what feature is not supported.
#[derive(Clone, Debug, Hash, PartialEq)]
#[non_exhaustive]
pub enum UnsupportedErrorKind {
/// The required color type can not be handled.
Color(ExtendedColorType),
/// An image format is not supported.
Format(ImageFormatHint),
/// Some feature specified by string.
/// This is discouraged and is likely to get deprecated (but not removed).
GenericFeature(String),
}
/// An error was encountered while encoding an image.
///
/// This is used as an opaque representation for the [`ImageError::Encoding`] variant. See its
/// documentation for more information.
///
/// [`ImageError::Encoding`]: enum.ImageError.html#variant.Encoding
#[derive(Debug)]
pub struct EncodingError {
format: ImageFormatHint,
underlying: Option<Box<dyn Error + Send + Sync>>,
}
/// An error was encountered in input arguments.
///
/// This is used as an opaque representation for the [`ImageError::Parameter`] variant. See its
/// documentation for more information.
///
/// [`ImageError::Parameter`]: enum.ImageError.html#variant.Parameter
#[derive(Debug)]
pub struct ParameterError {
kind: ParameterErrorKind,
underlying: Option<Box<dyn Error + Send + Sync>>,
}
/// Details how a parameter is malformed.
#[derive(Clone, Debug, Hash, PartialEq)]
#[non_exhaustive]
pub enum ParameterErrorKind {
/// The dimensions passed are wrong.
DimensionMismatch,
/// An operation was repeated for which an error that could not be cloned had already been emitted.
FailedAlready,
/// A string describing the parameter.
/// This is discouraged and is likely to get deprecated (but not removed).
Generic(String),
/// The end of the image has been reached.
NoMoreData,
}
/// An error was encountered while decoding an image.
///
/// This is used as an opaque representation for the [`ImageError::Decoding`] variant. See its
/// documentation for more information.
///
/// [`ImageError::Decoding`]: enum.ImageError.html#variant.Decoding
#[derive(Debug)]
pub struct DecodingError {
format: ImageFormatHint,
underlying: Option<Box<dyn Error + Send + Sync>>,
}
/// Completing the operation would have required more resources than allowed.
///
/// This is used as an opaque representation for the [`ImageError::Limits`] variant. See its
/// documentation for more information.
///
/// [`ImageError::Limits`]: enum.ImageError.html#variant.Limits
#[derive(Debug)]
pub struct LimitError {
kind: LimitErrorKind,
// do we need an underlying error?
}
/// Indicates the limit that prevented an operation from completing.
///
/// Note that this enumeration is not exhaustive and may in the future be extended to provide more
/// detailed information or to incorporate other resources types.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
#[non_exhaustive]
#[allow(missing_copy_implementations)] // Might be non-Copy in the future.
pub enum LimitErrorKind {
/// The resulting image exceeds dimension limits in either direction.
DimensionError,
/// The operation would have performed an allocation larger than allowed.
InsufficientMemory,
/// The specified strict limits are not supported for this operation.
Unsupported {
/// The given limits
limits: crate::io::Limits,
/// The supported strict limits
supported: crate::io::LimitSupport,
},
}
/// A best effort representation for image formats.
#[derive(Clone, Debug, Hash, PartialEq)]
#[non_exhaustive]
pub enum ImageFormatHint {
/// The format is known exactly.
Exact(ImageFormat),
/// The format can be identified by a name.
Name(String),
/// A common path extension for the format is known.
PathExtension(std::path::PathBuf),
/// The format is not known or could not be determined.
Unknown,
}
impl UnsupportedError {
/// Create an `UnsupportedError` for an image with details on the unsupported feature.
///
/// If the operation was not connected to a particular image format then the hint may be
/// `Unknown`.
pub fn from_format_and_kind(format: ImageFormatHint, kind: UnsupportedErrorKind) -> Self {
UnsupportedError { format, kind }
}
/// Returns the corresponding `UnsupportedErrorKind` of the error.
pub fn kind(&self) -> UnsupportedErrorKind {
self.kind.clone()
}
/// Returns the image format associated with this error.
pub fn format_hint(&self) -> ImageFormatHint {
self.format.clone()
}
}
impl DecodingError {
/// Create a `DecodingError` that stems from an arbitrary error of an underlying decoder.
pub fn new(format: ImageFormatHint, err: impl Into<Box<dyn Error + Send + Sync>>) -> Self {
DecodingError {
format,
underlying: Some(err.into()),
}
}
/// Create a `DecodingError` for an image format.
///
/// The error will not contain any further information but is very easy to create.
pub fn from_format_hint(format: ImageFormatHint) -> Self {
DecodingError {
format,
underlying: None,
}
}
/// Returns the image format associated with this error.
pub fn format_hint(&self) -> ImageFormatHint {
self.format.clone()
}
}
impl EncodingError {
/// Create an `EncodingError` that stems from an arbitrary error of an underlying encoder.
pub fn new(format: ImageFormatHint, err: impl Into<Box<dyn Error + Send + Sync>>) -> Self {
EncodingError {
format,
underlying: Some(err.into()),
}
}
/// Create an `EncodingError` for an image format.
///
/// The error will not contain any further information but is very easy to create.
pub fn from_format_hint(format: ImageFormatHint) -> Self {
EncodingError {
format,
underlying: None,
}
}
/// Return the image format associated with this error.
pub fn format_hint(&self) -> ImageFormatHint {
self.format.clone()
}
}
impl ParameterError {
/// Construct a `ParameterError` directly from a corresponding kind.
pub fn from_kind(kind: ParameterErrorKind) -> Self {
ParameterError {
kind,
underlying: None,
}
}
/// Returns the corresponding `ParameterErrorKind` of the error.
pub fn kind(&self) -> ParameterErrorKind {
self.kind.clone()
}
}
impl LimitError {
/// Construct a generic `LimitError` directly from a corresponding kind.
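    ///
    /// # Examples
    ///
    /// A small sketch of constructing and inspecting a limit error:
    ///
    /// ```
    /// use image::error::{LimitError, LimitErrorKind};
    ///
    /// let err = LimitError::from_kind(LimitErrorKind::InsufficientMemory);
    /// assert_eq!(err.kind(), LimitErrorKind::InsufficientMemory);
    /// ```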
pub fn from_kind(kind: LimitErrorKind) -> Self {
LimitError { kind }
}
/// Returns the corresponding `LimitErrorKind` of the error.
pub fn kind(&self) -> LimitErrorKind {
self.kind.clone()
}
}
impl From<io::Error> for ImageError {
fn from(err: io::Error) -> ImageError {
ImageError::IoError(err)
}
}
impl From<ImageFormat> for ImageFormatHint {
fn from(format: ImageFormat) -> Self {
ImageFormatHint::Exact(format)
}
}
impl From<&'_ std::path::Path> for ImageFormatHint {
fn from(path: &'_ std::path::Path) -> Self {
match path.extension() {
Some(ext) => ImageFormatHint::PathExtension(ext.into()),
None => ImageFormatHint::Unknown,
}
}
}
impl From<ImageFormatHint> for UnsupportedError {
fn from(hint: ImageFormatHint) -> Self {
UnsupportedError {
format: hint.clone(),
kind: UnsupportedErrorKind::Format(hint),
}
}
}
/// Result of an image decoding/encoding process
pub type ImageResult<T> = Result<T, ImageError>;
impl fmt::Display for ImageError {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match self {
ImageError::IoError(err) => err.fmt(fmt),
ImageError::Decoding(err) => err.fmt(fmt),
ImageError::Encoding(err) => err.fmt(fmt),
ImageError::Parameter(err) => err.fmt(fmt),
ImageError::Limits(err) => err.fmt(fmt),
ImageError::Unsupported(err) => err.fmt(fmt),
}
}
}
impl Error for ImageError {
fn source(&self) -> Option<&(dyn Error + 'static)> {
match self {
ImageError::IoError(err) => err.source(),
ImageError::Decoding(err) => err.source(),
ImageError::Encoding(err) => err.source(),
ImageError::Parameter(err) => err.source(),
ImageError::Limits(err) => err.source(),
ImageError::Unsupported(err) => err.source(),
}
}
}
impl fmt::Display for UnsupportedError {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match &self.kind {
UnsupportedErrorKind::Format(ImageFormatHint::Unknown) => {
write!(fmt, "The image format could not be determined",)
}
UnsupportedErrorKind::Format(format @ ImageFormatHint::PathExtension(_)) => write!(
fmt,
"The file extension {} was not recognized as an image format",
format,
),
UnsupportedErrorKind::Format(format) => {
write!(fmt, "The image format {} is not supported", format,)
}
UnsupportedErrorKind::Color(color) => write!(
fmt,
"The decoder for {} does not support the color type `{:?}`",
self.format, color,
),
UnsupportedErrorKind::GenericFeature(message) => match &self.format {
ImageFormatHint::Unknown => write!(
fmt,
"The decoder does not support the format feature {}",
message,
),
other => write!(
fmt,
"The decoder for {} does not support the format features {}",
other, message,
),
},
}
}
}
impl Error for UnsupportedError {}
impl fmt::Display for ParameterError {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match &self.kind {
            ParameterErrorKind::DimensionMismatch => write!(
                fmt,
                "The image's dimensions are either too \
                 small or too large"
            ),
ParameterErrorKind::FailedAlready => write!(
fmt,
"The end the image stream has been reached due to a previous error"
),
ParameterErrorKind::Generic(message) => {
write!(fmt, "The parameter is malformed: {}", message,)
}
ParameterErrorKind::NoMoreData => write!(fmt, "The end of the image has been reached",),
}?;
if let Some(underlying) = &self.underlying {
write!(fmt, "\n{}", underlying)?;
}
Ok(())
}
}
impl Error for ParameterError {
fn source(&self) -> Option<&(dyn Error + 'static)> {
match &self.underlying {
None => None,
Some(source) => Some(&**source),
}
}
}
impl fmt::Display for EncodingError {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match &self.underlying {
Some(underlying) => write!(
fmt,
"Format error encoding {}:\n{}",
self.format, underlying,
),
None => write!(fmt, "Format error encoding {}", self.format,),
}
}
}
impl Error for EncodingError {
fn source(&self) -> Option<&(dyn Error + 'static)> {
match &self.underlying {
None => None,
Some(source) => Some(&**source),
}
}
}
impl fmt::Display for DecodingError {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match &self.underlying {
None => match self.format {
ImageFormatHint::Unknown => write!(fmt, "Format error"),
_ => write!(fmt, "Format error decoding {}", self.format),
},
Some(underlying) => {
write!(fmt, "Format error decoding {}: {}", self.format, underlying)
}
}
}
}
impl Error for DecodingError {
fn source(&self) -> Option<&(dyn Error + 'static)> {
match &self.underlying {
None => None,
Some(source) => Some(&**source),
}
}
}
impl fmt::Display for LimitError {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
        match &self.kind {
            LimitErrorKind::InsufficientMemory => write!(fmt, "Insufficient memory"),
            LimitErrorKind::DimensionError => write!(fmt, "Image is too large"),
            LimitErrorKind::Unsupported { limits, supported } => write!(
                fmt,
                "The following strict limits are specified but not supported by the operation: {:?} (supported strict limits: {:?})",
                limits, supported,
            ),
        }
}
}
impl Error for LimitError {}
impl fmt::Display for ImageFormatHint {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match self {
ImageFormatHint::Exact(format) => write!(fmt, "{:?}", format),
ImageFormatHint::Name(name) => write!(fmt, "`{}`", name),
ImageFormatHint::PathExtension(ext) => write!(fmt, "`.{:?}`", ext),
ImageFormatHint::Unknown => write!(fmt, "`Unknown`"),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::mem;
#[allow(dead_code)]
// This will fail to compile if the size of this type is large.
const ASSERT_SMALLISH: usize = [0][(mem::size_of::<ImageError>() >= 200) as usize];
#[test]
fn test_send_sync_stability() {
fn assert_send_sync<T: Send + Sync>() {}
assert_send_sync::<ImageError>();
}
}

1735
vendor/image/src/flat.rs vendored Normal file

File diff suppressed because it is too large

1915
vendor/image/src/image.rs vendored Normal file

File diff suppressed because it is too large

410
vendor/image/src/imageops/affine.rs vendored Normal file

@@ -0,0 +1,410 @@
//! Functions for performing affine transformations.
use crate::error::{ImageError, ParameterError, ParameterErrorKind};
use crate::image::{GenericImage, GenericImageView};
use crate::traits::Pixel;
use crate::ImageBuffer;
/// Rotate an image 90 degrees clockwise.
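///
/// # Examples
///
/// A small sketch of the expected behavior: rotating a 3x2 image yields a 2x3
/// image, since the output dimensions are swapped.
///
/// ```
/// use image::{GrayImage, ImageBuffer};
///
/// let img: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1, 2, 10, 11, 12]).unwrap();
/// let out = image::imageops::rotate90(&img);
/// assert_eq!(out.dimensions(), (2, 3));
/// ```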
pub fn rotate90<I: GenericImageView>(
image: &I,
) -> ImageBuffer<I::Pixel, Vec<<I::Pixel as Pixel>::Subpixel>>
where
I::Pixel: 'static,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(height, width);
let _ = rotate90_in(image, &mut out);
out
}
/// Rotate an image 180 degrees clockwise.
pub fn rotate180<I: GenericImageView>(
image: &I,
) -> ImageBuffer<I::Pixel, Vec<<I::Pixel as Pixel>::Subpixel>>
where
I::Pixel: 'static,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(width, height);
let _ = rotate180_in(image, &mut out);
out
}
/// Rotate an image 270 degrees clockwise.
pub fn rotate270<I: GenericImageView>(
image: &I,
) -> ImageBuffer<I::Pixel, Vec<<I::Pixel as Pixel>::Subpixel>>
where
I::Pixel: 'static,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(height, width);
let _ = rotate270_in(image, &mut out);
out
}
/// Rotate an image 90 degrees clockwise and put the result into the destination [`ImageBuffer`].
pub fn rotate90_in<I, Container>(
image: &I,
destination: &mut ImageBuffer<I::Pixel, Container>,
) -> crate::ImageResult<()>
where
I: GenericImageView,
I::Pixel: 'static,
Container: std::ops::DerefMut<Target = [<I::Pixel as Pixel>::Subpixel]>,
{
let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions());
if w0 != h1 || h0 != w1 {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
for y in 0..h0 {
for x in 0..w0 {
let p = image.get_pixel(x, y);
destination.put_pixel(h0 - y - 1, x, p);
}
}
Ok(())
}
/// Rotate an image 180 degrees clockwise and put the result into the destination [`ImageBuffer`].
pub fn rotate180_in<I, Container>(
image: &I,
destination: &mut ImageBuffer<I::Pixel, Container>,
) -> crate::ImageResult<()>
where
I: GenericImageView,
I::Pixel: 'static,
Container: std::ops::DerefMut<Target = [<I::Pixel as Pixel>::Subpixel]>,
{
let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions());
if w0 != w1 || h0 != h1 {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
for y in 0..h0 {
for x in 0..w0 {
let p = image.get_pixel(x, y);
destination.put_pixel(w0 - x - 1, h0 - y - 1, p);
}
}
Ok(())
}
/// Rotate an image 270 degrees clockwise and put the result into the destination [`ImageBuffer`].
pub fn rotate270_in<I, Container>(
image: &I,
destination: &mut ImageBuffer<I::Pixel, Container>,
) -> crate::ImageResult<()>
where
I: GenericImageView,
I::Pixel: 'static,
Container: std::ops::DerefMut<Target = [<I::Pixel as Pixel>::Subpixel]>,
{
let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions());
if w0 != h1 || h0 != w1 {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
for y in 0..h0 {
for x in 0..w0 {
let p = image.get_pixel(x, y);
destination.put_pixel(y, w0 - x - 1, p);
}
}
Ok(())
}
/// Flip an image horizontally.
pub fn flip_horizontal<I: GenericImageView>(
image: &I,
) -> ImageBuffer<I::Pixel, Vec<<I::Pixel as Pixel>::Subpixel>>
where
I::Pixel: 'static,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(width, height);
let _ = flip_horizontal_in(image, &mut out);
out
}
/// Flip an image vertically.
pub fn flip_vertical<I: GenericImageView>(
image: &I,
) -> ImageBuffer<I::Pixel, Vec<<I::Pixel as Pixel>::Subpixel>>
where
I::Pixel: 'static,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(width, height);
let _ = flip_vertical_in(image, &mut out);
out
}
/// Flip an image horizontally and put the result into the destination [`ImageBuffer`].
pub fn flip_horizontal_in<I, Container>(
image: &I,
destination: &mut ImageBuffer<I::Pixel, Container>,
) -> crate::ImageResult<()>
where
I: GenericImageView,
I::Pixel: 'static,
Container: std::ops::DerefMut<Target = [<I::Pixel as Pixel>::Subpixel]>,
{
let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions());
if w0 != w1 || h0 != h1 {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
for y in 0..h0 {
for x in 0..w0 {
let p = image.get_pixel(x, y);
destination.put_pixel(w0 - x - 1, y, p);
}
}
Ok(())
}
/// Flip an image vertically and put the result into the destination [`ImageBuffer`].
pub fn flip_vertical_in<I, Container>(
image: &I,
destination: &mut ImageBuffer<I::Pixel, Container>,
) -> crate::ImageResult<()>
where
I: GenericImageView,
I::Pixel: 'static,
Container: std::ops::DerefMut<Target = [<I::Pixel as Pixel>::Subpixel]>,
{
let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions());
if w0 != w1 || h0 != h1 {
return Err(ImageError::Parameter(ParameterError::from_kind(
ParameterErrorKind::DimensionMismatch,
)));
}
for y in 0..h0 {
for x in 0..w0 {
let p = image.get_pixel(x, y);
destination.put_pixel(x, h0 - 1 - y, p);
}
}
Ok(())
}
/// Rotate an image 180 degrees clockwise in place.
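///
/// # Examples
///
/// A small sketch: every pixel swaps with its point-reflected counterpart, so a
/// 2x1 image simply reverses.
///
/// ```
/// use image::{GrayImage, ImageBuffer};
///
/// let mut img: GrayImage = ImageBuffer::from_raw(2, 1, vec![1u8, 2]).unwrap();
/// image::imageops::rotate180_in_place(&mut img);
/// assert_eq!(img.into_raw(), vec![2, 1]);
/// ```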
pub fn rotate180_in_place<I: GenericImage>(image: &mut I) {
let (width, height) = image.dimensions();
for y in 0..height / 2 {
for x in 0..width {
let p = image.get_pixel(x, y);
let x2 = width - x - 1;
let y2 = height - y - 1;
let p2 = image.get_pixel(x2, y2);
image.put_pixel(x, y, p2);
image.put_pixel(x2, y2, p);
}
}
if height % 2 != 0 {
let middle = height / 2;
for x in 0..width / 2 {
let p = image.get_pixel(x, middle);
let x2 = width - x - 1;
let p2 = image.get_pixel(x2, middle);
image.put_pixel(x, middle, p2);
image.put_pixel(x2, middle, p);
}
}
}
/// Flip an image horizontally in place.
pub fn flip_horizontal_in_place<I: GenericImage>(image: &mut I) {
let (width, height) = image.dimensions();
for y in 0..height {
for x in 0..width / 2 {
let x2 = width - x - 1;
let p2 = image.get_pixel(x2, y);
let p = image.get_pixel(x, y);
image.put_pixel(x2, y, p);
image.put_pixel(x, y, p2);
}
}
}
/// Flip an image vertically in place.
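///
/// # Examples
///
/// A small sketch: rows trade places top-to-bottom, so a 1x2 column reverses.
///
/// ```
/// use image::{GrayImage, ImageBuffer};
///
/// let mut img: GrayImage = ImageBuffer::from_raw(1, 2, vec![7u8, 9]).unwrap();
/// image::imageops::flip_vertical_in_place(&mut img);
/// assert_eq!(img.into_raw(), vec![9, 7]);
/// ```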
pub fn flip_vertical_in_place<I: GenericImage>(image: &mut I) {
let (width, height) = image.dimensions();
for y in 0..height / 2 {
for x in 0..width {
let y2 = height - y - 1;
let p2 = image.get_pixel(x, y2);
let p = image.get_pixel(x, y);
image.put_pixel(x, y2, p);
image.put_pixel(x, y, p2);
}
}
}
#[cfg(test)]
mod test {
use super::{
flip_horizontal, flip_horizontal_in_place, flip_vertical, flip_vertical_in_place,
rotate180, rotate180_in_place, rotate270, rotate90,
};
use crate::image::GenericImage;
use crate::traits::Pixel;
use crate::{GrayImage, ImageBuffer};
macro_rules! assert_pixels_eq {
($actual:expr, $expected:expr) => {{
let actual_dim = $actual.dimensions();
let expected_dim = $expected.dimensions();
if actual_dim != expected_dim {
panic!(
"dimensions do not match. \
actual: {:?}, expected: {:?}",
actual_dim, expected_dim
)
}
let diffs = pixel_diffs($actual, $expected);
if !diffs.is_empty() {
let mut err = "".to_string();
let diff_messages = diffs
.iter()
.take(5)
.map(|d| format!("\nactual: {:?}, expected {:?} ", d.0, d.1))
.collect::<Vec<_>>()
.join("");
err.push_str(&diff_messages);
panic!("pixels do not match. {:?}", err)
}
}};
}
#[test]
fn test_rotate90() {
let image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(2, 3, vec![10u8, 00u8, 11u8, 01u8, 12u8, 02u8]).unwrap();
assert_pixels_eq!(&rotate90(&image), &expected);
}
#[test]
fn test_rotate180() {
let image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![12u8, 11u8, 10u8, 02u8, 01u8, 00u8]).unwrap();
assert_pixels_eq!(&rotate180(&image), &expected);
}
#[test]
fn test_rotate270() {
let image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(2, 3, vec![02u8, 12u8, 01u8, 11u8, 00u8, 10u8]).unwrap();
assert_pixels_eq!(&rotate270(&image), &expected);
}
#[test]
fn test_rotate180_in_place() {
let mut image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![12u8, 11u8, 10u8, 02u8, 01u8, 00u8]).unwrap();
rotate180_in_place(&mut image);
assert_pixels_eq!(&image, &expected);
}
#[test]
fn test_flip_horizontal() {
let image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![02u8, 01u8, 00u8, 12u8, 11u8, 10u8]).unwrap();
assert_pixels_eq!(&flip_horizontal(&image), &expected);
}
#[test]
fn test_flip_vertical() {
let image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![10u8, 11u8, 12u8, 00u8, 01u8, 02u8]).unwrap();
assert_pixels_eq!(&flip_vertical(&image), &expected);
}
#[test]
fn test_flip_horizontal_in_place() {
let mut image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![02u8, 01u8, 00u8, 12u8, 11u8, 10u8]).unwrap();
flip_horizontal_in_place(&mut image);
assert_pixels_eq!(&image, &expected);
}
#[test]
fn test_flip_vertical_in_place() {
let mut image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![10u8, 11u8, 12u8, 00u8, 01u8, 02u8]).unwrap();
flip_vertical_in_place(&mut image);
assert_pixels_eq!(&image, &expected);
}
fn pixel_diffs<I, J, P>(left: &I, right: &J) -> Vec<((u32, u32, P), (u32, u32, P))>
where
I: GenericImage<Pixel = P>,
J: GenericImage<Pixel = P>,
P: Pixel + Eq,
{
left.pixels()
.zip(right.pixels())
.filter(|&(p, q)| p != q)
.collect::<Vec<_>>()
}
}

646
vendor/image/src/imageops/colorops.rs vendored Normal file

@@ -0,0 +1,646 @@
//! Functions for altering and converting the color of pixelbufs
use num_traits::NumCast;
use std::f64::consts::PI;
use crate::color::{FromColor, IntoColor, Luma, LumaA, Rgba};
use crate::image::{GenericImage, GenericImageView};
use crate::traits::{Pixel, Primitive};
use crate::utils::clamp;
use crate::ImageBuffer;
type Subpixel<I> = <<I as GenericImageView>::Pixel as Pixel>::Subpixel;
/// Convert the supplied image to grayscale. Alpha channel is discarded.
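///
/// # Examples
///
/// A small sketch: the output keeps the input dimensions but has a single luma
/// channel per pixel.
///
/// ```
/// use image::{ImageBuffer, Rgb, RgbImage};
///
/// let img: RgbImage = ImageBuffer::from_pixel(2, 2, Rgb([255u8, 0, 0]));
/// let gray = image::imageops::grayscale(&img);
/// assert_eq!(gray.dimensions(), (2, 2));
/// ```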
pub fn grayscale<I: GenericImageView>(
image: &I,
) -> ImageBuffer<Luma<Subpixel<I>>, Vec<Subpixel<I>>> {
grayscale_with_type(image)
}
/// Convert the supplied image to grayscale. Alpha channel is preserved.
pub fn grayscale_alpha<I: GenericImageView>(
image: &I,
) -> ImageBuffer<LumaA<Subpixel<I>>, Vec<Subpixel<I>>> {
grayscale_with_type_alpha(image)
}
/// Convert the supplied image to a grayscale image with the specified pixel type. Alpha channel is discarded.
pub fn grayscale_with_type<NewPixel, I: GenericImageView>(
image: &I,
) -> ImageBuffer<NewPixel, Vec<NewPixel::Subpixel>>
where
NewPixel: Pixel + FromColor<Luma<Subpixel<I>>>,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(width, height);
for (x, y, pixel) in image.pixels() {
let grayscale = pixel.to_luma();
let new_pixel = grayscale.into_color(); // no-op for luma->luma
out.put_pixel(x, y, new_pixel);
}
out
}
/// Convert the supplied image to a grayscale image with the specified pixel type. Alpha channel is preserved.
pub fn grayscale_with_type_alpha<NewPixel, I: GenericImageView>(
image: &I,
) -> ImageBuffer<NewPixel, Vec<NewPixel::Subpixel>>
where
NewPixel: Pixel + FromColor<LumaA<Subpixel<I>>>,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(width, height);
for (x, y, pixel) in image.pixels() {
let grayscale = pixel.to_luma_alpha();
let new_pixel = grayscale.into_color(); // no-op for luma->luma
out.put_pixel(x, y, new_pixel);
}
out
}
/// Invert each pixel within the supplied image.
/// This function operates in place.
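///
/// # Examples
///
/// A small sketch: for 8-bit luma, each value becomes `255 - value`.
///
/// ```
/// use image::{GrayImage, ImageBuffer};
///
/// let mut img: GrayImage = ImageBuffer::from_raw(2, 1, vec![0u8, 200]).unwrap();
/// image::imageops::invert(&mut img);
/// assert_eq!(img.into_raw(), vec![255, 55]);
/// ```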
pub fn invert<I: GenericImage>(image: &mut I) {
// TODO find a way to use pixels?
let (width, height) = image.dimensions();
for y in 0..height {
for x in 0..width {
let mut p = image.get_pixel(x, y);
p.invert();
image.put_pixel(x, y, p);
}
}
}
/// Adjust the contrast of the supplied image.
/// `contrast` is the amount to adjust the contrast by.
/// Negative values decrease the contrast and positive values increase the contrast.
///
/// *[See also `contrast_in_place`.][contrast_in_place]*
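///
/// # Examples
///
/// A small sketch: a `contrast` of `0.0` is an identity transform.
///
/// ```
/// use image::{GrayImage, ImageBuffer};
///
/// let img: GrayImage = ImageBuffer::from_raw(2, 1, vec![0u8, 255]).unwrap();
/// let out = image::imageops::contrast(&img, 0.0);
/// assert_eq!(out.into_raw(), vec![0, 255]);
/// ```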
pub fn contrast<I, P, S>(image: &I, contrast: f32) -> ImageBuffer<P, Vec<S>>
where
I: GenericImageView<Pixel = P>,
P: Pixel<Subpixel = S> + 'static,
S: Primitive + 'static,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(width, height);
let max = S::DEFAULT_MAX_VALUE;
let max: f32 = NumCast::from(max).unwrap();
let percent = ((100.0 + contrast) / 100.0).powi(2);
for (x, y, pixel) in image.pixels() {
let f = pixel.map(|b| {
let c: f32 = NumCast::from(b).unwrap();
let d = ((c / max - 0.5) * percent + 0.5) * max;
let e = clamp(d, 0.0, max);
NumCast::from(e).unwrap()
});
out.put_pixel(x, y, f);
}
out
}
/// Adjust the contrast of the supplied image in place.
/// `contrast` is the amount to adjust the contrast by.
/// Negative values decrease the contrast and positive values increase the contrast.
///
/// *[See also `contrast`.][contrast]*
pub fn contrast_in_place<I>(image: &mut I, contrast: f32)
where
I: GenericImage,
{
let (width, height) = image.dimensions();
let max = <I::Pixel as Pixel>::Subpixel::DEFAULT_MAX_VALUE;
let max: f32 = NumCast::from(max).unwrap();
let percent = ((100.0 + contrast) / 100.0).powi(2);
// TODO find a way to use pixels?
for y in 0..height {
for x in 0..width {
let f = image.get_pixel(x, y).map(|b| {
let c: f32 = NumCast::from(b).unwrap();
let d = ((c / max - 0.5) * percent + 0.5) * max;
let e = clamp(d, 0.0, max);
NumCast::from(e).unwrap()
});
image.put_pixel(x, y, f);
}
}
}
/// Brighten the supplied image.
/// `value` is the amount to brighten each pixel by.
/// Negative values decrease the brightness and positive values increase it.
///
/// *[See also `brighten_in_place`.][brighten_in_place]*
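///
/// # Examples
///
/// A small sketch: values are clamped to the subpixel range instead of wrapping.
///
/// ```
/// use image::{GrayImage, ImageBuffer};
///
/// let img: GrayImage = ImageBuffer::from_raw(2, 1, vec![10u8, 250]).unwrap();
/// let out = image::imageops::brighten(&img, 10);
/// assert_eq!(out.into_raw(), vec![20, 255]);
/// ```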
pub fn brighten<I, P, S>(image: &I, value: i32) -> ImageBuffer<P, Vec<S>>
where
I: GenericImageView<Pixel = P>,
P: Pixel<Subpixel = S> + 'static,
S: Primitive + 'static,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(width, height);
let max = S::DEFAULT_MAX_VALUE;
let max: i32 = NumCast::from(max).unwrap();
for (x, y, pixel) in image.pixels() {
let e = pixel.map_with_alpha(
|b| {
let c: i32 = NumCast::from(b).unwrap();
let d = clamp(c + value, 0, max);
NumCast::from(d).unwrap()
},
|alpha| alpha,
);
out.put_pixel(x, y, e);
}
out
}
/// Brighten the supplied image in place.
/// `value` is the amount to brighten each pixel by.
/// Negative values decrease the brightness and positive values increase it.
///
/// *[See also `brighten`.][brighten]*
pub fn brighten_in_place<I>(image: &mut I, value: i32)
where
I: GenericImage,
{
let (width, height) = image.dimensions();
let max = <I::Pixel as Pixel>::Subpixel::DEFAULT_MAX_VALUE;
let max: i32 = NumCast::from(max).unwrap(); // TODO what does this do for f32? clamp at 1??
// TODO find a way to use pixels?
for y in 0..height {
for x in 0..width {
let e = image.get_pixel(x, y).map_with_alpha(
|b| {
let c: i32 = NumCast::from(b).unwrap();
let d = clamp(c + value, 0, max);
NumCast::from(d).unwrap()
},
|alpha| alpha,
);
image.put_pixel(x, y, e);
}
}
}
/// Hue rotate the supplied image.
/// `value` is the number of degrees to rotate each pixel by.
/// 0 and 360 do nothing, while other values rotate the hue by that many degrees,
/// just like the CSS filter `hue-rotate(180deg)`.
///
/// *[See also `huerotate_in_place`.][huerotate_in_place]*
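///
/// # Examples
///
/// A small sketch: rotating by `0` degrees leaves every pixel unchanged.
///
/// ```
/// use image::{ImageBuffer, Rgba, RgbaImage};
///
/// let img: RgbaImage = ImageBuffer::from_pixel(1, 1, Rgba([10u8, 20, 30, 255]));
/// let out = image::imageops::huerotate(&img, 0);
/// assert_eq!(out.get_pixel(0, 0), &Rgba([10, 20, 30, 255]));
/// ```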
pub fn huerotate<I, P, S>(image: &I, value: i32) -> ImageBuffer<P, Vec<S>>
where
I: GenericImageView<Pixel = P>,
P: Pixel<Subpixel = S> + 'static,
S: Primitive + 'static,
{
let (width, height) = image.dimensions();
let mut out = ImageBuffer::new(width, height);
let angle: f64 = NumCast::from(value).unwrap();
let cosv = (angle * PI / 180.0).cos();
let sinv = (angle * PI / 180.0).sin();
let matrix: [f64; 9] = [
// Reds
0.213 + cosv * 0.787 - sinv * 0.213,
0.715 - cosv * 0.715 - sinv * 0.715,
0.072 - cosv * 0.072 + sinv * 0.928,
// Greens
0.213 - cosv * 0.213 + sinv * 0.143,
0.715 + cosv * 0.285 + sinv * 0.140,
0.072 - cosv * 0.072 - sinv * 0.283,
// Blues
0.213 - cosv * 0.213 - sinv * 0.787,
0.715 - cosv * 0.715 + sinv * 0.715,
0.072 + cosv * 0.928 + sinv * 0.072,
];
for (x, y, pixel) in out.enumerate_pixels_mut() {
let p = image.get_pixel(x, y);
#[allow(deprecated)]
let (k1, k2, k3, k4) = p.channels4();
let vec: (f64, f64, f64, f64) = (
NumCast::from(k1).unwrap(),
NumCast::from(k2).unwrap(),
NumCast::from(k3).unwrap(),
NumCast::from(k4).unwrap(),
);
let r = vec.0;
let g = vec.1;
let b = vec.2;
let new_r = matrix[0] * r + matrix[1] * g + matrix[2] * b;
let new_g = matrix[3] * r + matrix[4] * g + matrix[5] * b;
let new_b = matrix[6] * r + matrix[7] * g + matrix[8] * b;
let max = 255f64;
#[allow(deprecated)]
let outpixel = Pixel::from_channels(
NumCast::from(clamp(new_r, 0.0, max)).unwrap(),
NumCast::from(clamp(new_g, 0.0, max)).unwrap(),
NumCast::from(clamp(new_b, 0.0, max)).unwrap(),
NumCast::from(clamp(vec.3, 0.0, max)).unwrap(),
);
*pixel = outpixel;
}
out
}
/// Hue rotate the supplied image in place.
/// `value` is the number of degrees to rotate each pixel by.
/// 0 and 360 do nothing, while other values rotate the hue by that many degrees,
/// just like the CSS filter `hue-rotate(180deg)`.
///
/// *[See also `huerotate`.][huerotate]*
pub fn huerotate_in_place<I>(image: &mut I, value: i32)
where
I: GenericImage,
{
let (width, height) = image.dimensions();
let angle: f64 = NumCast::from(value).unwrap();
let cosv = (angle * PI / 180.0).cos();
let sinv = (angle * PI / 180.0).sin();
let matrix: [f64; 9] = [
// Reds
0.213 + cosv * 0.787 - sinv * 0.213,
0.715 - cosv * 0.715 - sinv * 0.715,
0.072 - cosv * 0.072 + sinv * 0.928,
// Greens
0.213 - cosv * 0.213 + sinv * 0.143,
0.715 + cosv * 0.285 + sinv * 0.140,
0.072 - cosv * 0.072 - sinv * 0.283,
// Blues
0.213 - cosv * 0.213 - sinv * 0.787,
0.715 - cosv * 0.715 + sinv * 0.715,
0.072 + cosv * 0.928 + sinv * 0.072,
];
// TODO find a way to use pixels?
for y in 0..height {
for x in 0..width {
let pixel = image.get_pixel(x, y);
#[allow(deprecated)]
let (k1, k2, k3, k4) = pixel.channels4();
let vec: (f64, f64, f64, f64) = (
NumCast::from(k1).unwrap(),
NumCast::from(k2).unwrap(),
NumCast::from(k3).unwrap(),
NumCast::from(k4).unwrap(),
);
let r = vec.0;
let g = vec.1;
let b = vec.2;
let new_r = matrix[0] * r + matrix[1] * g + matrix[2] * b;
let new_g = matrix[3] * r + matrix[4] * g + matrix[5] * b;
let new_b = matrix[6] * r + matrix[7] * g + matrix[8] * b;
let max = 255f64;
#[allow(deprecated)]
let outpixel = Pixel::from_channels(
NumCast::from(clamp(new_r, 0.0, max)).unwrap(),
NumCast::from(clamp(new_g, 0.0, max)).unwrap(),
NumCast::from(clamp(new_b, 0.0, max)).unwrap(),
NumCast::from(clamp(vec.3, 0.0, max)).unwrap(),
);
image.put_pixel(x, y, outpixel);
}
}
}
/// A color map
pub trait ColorMap {
    /// The color type on which the map operates.
type Color;
/// Returns the index of the closest match of `color`
/// in the color map.
fn index_of(&self, color: &Self::Color) -> usize;
    /// Looks up a color by `index` in the color map.
    ///
    /// Returns `None` if `index` is out of range for the color map or if the
    /// `ColorMap` does not implement `lookup`.
fn lookup(&self, index: usize) -> Option<Self::Color> {
let _ = index;
None
}
/// Determine if this implementation of ColorMap overrides the default `lookup`.
fn has_lookup(&self) -> bool {
false
}
/// Maps `color` to the closest color in the color map.
fn map_color(&self, color: &mut Self::Color);
}
/// A bi-level color map
///
/// # Examples
/// ```
/// use image::imageops::colorops::{index_colors, BiLevel, ColorMap};
/// use image::{ImageBuffer, Luma};
///
/// let (w, h) = (16, 16);
/// // Create an image with a smooth horizontal gradient from black (0) to white (255).
/// let gray = ImageBuffer::from_fn(w, h, |x, y| -> Luma<u8> { [(255 * x / w) as u8].into() });
/// // Mapping the gray image through the `BiLevel` filter should map gray pixels less than half
/// // intensity (127) to black (0), and anything greater to white (255).
/// let cmap = BiLevel;
/// let palletized = index_colors(&gray, &cmap);
/// let mapped = ImageBuffer::from_fn(w, h, |x, y| {
/// let p = palletized.get_pixel(x, y);
/// cmap.lookup(p.0[0] as usize)
/// .expect("indexed color out-of-range")
/// });
/// // Create a black-and-white image of the expected output.
/// let bw = ImageBuffer::from_fn(w, h, |x, y| -> Luma<u8> {
/// if x <= (w / 2) {
/// [0].into()
/// } else {
/// [255].into()
/// }
/// });
/// assert_eq!(mapped, bw);
/// ```
#[derive(Clone, Copy)]
pub struct BiLevel;
impl ColorMap for BiLevel {
type Color = Luma<u8>;
#[inline(always)]
fn index_of(&self, color: &Luma<u8>) -> usize {
let luma = color.0;
if luma[0] > 127 {
1
} else {
0
}
}
#[inline(always)]
fn lookup(&self, idx: usize) -> Option<Self::Color> {
match idx {
0 => Some([0].into()),
1 => Some([255].into()),
_ => None,
}
}
    /// Indicate that `BiLevel` implements `lookup`.
fn has_lookup(&self) -> bool {
true
}
#[inline(always)]
fn map_color(&self, color: &mut Luma<u8>) {
let new_color = 0xFF * self.index_of(color) as u8;
let luma = &mut color.0;
luma[0] = new_color;
}
}
impl ColorMap for color_quant::NeuQuant {
type Color = Rgba<u8>;
#[inline(always)]
fn index_of(&self, color: &Rgba<u8>) -> usize {
self.index_of(color.channels())
}
#[inline(always)]
fn lookup(&self, idx: usize) -> Option<Self::Color> {
self.lookup(idx).map(|p| p.into())
}
    /// Indicate that `NeuQuant` implements `lookup`.
fn has_lookup(&self) -> bool {
true
}
#[inline(always)]
fn map_color(&self, color: &mut Rgba<u8>) {
self.map_pixel(color.channels_mut())
}
}
/// Floyd-Steinberg error diffusion
fn diffuse_err<P: Pixel<Subpixel = u8>>(pixel: &mut P, error: [i16; 3], factor: i16) {
for (e, c) in error.iter().zip(pixel.channels_mut().iter_mut()) {
*c = match <i16 as From<_>>::from(*c) + e * factor / 16 {
val if val < 0 => 0,
val if val > 0xFF => 0xFF,
val => val as u8,
}
}
}
macro_rules! do_dithering(
($map:expr, $image:expr, $err:expr, $x:expr, $y:expr) => (
{
let old_pixel = $image[($x, $y)];
let new_pixel = $image.get_pixel_mut($x, $y);
$map.map_color(new_pixel);
for ((e, &old), &new) in $err.iter_mut()
.zip(old_pixel.channels().iter())
.zip(new_pixel.channels().iter())
{
*e = <i16 as From<_>>::from(old) - <i16 as From<_>>::from(new)
}
}
)
);
/// Reduces the colors of the image using the supplied `color_map` while applying
/// Floyd-Steinberg dithering to improve the visual perception
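///
/// # Examples
///
/// A small sketch mirroring this module's unit test: a uniform mid-gray image
/// dithers to an alternating black/white pattern under the `BiLevel` map.
///
/// ```
/// use image::imageops::colorops::{dither, BiLevel};
/// use image::{GrayImage, ImageBuffer};
///
/// let mut img: GrayImage = ImageBuffer::from_raw(2, 2, vec![127u8; 4]).unwrap();
/// dither(&mut img, &BiLevel);
/// assert_eq!(img.into_raw(), vec![0, 255, 255, 0]);
/// ```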
pub fn dither<Pix, Map>(image: &mut ImageBuffer<Pix, Vec<u8>>, color_map: &Map)
where
Map: ColorMap<Color = Pix> + ?Sized,
Pix: Pixel<Subpixel = u8> + 'static,
{
let (width, height) = image.dimensions();
let mut err: [i16; 3] = [0; 3];
for y in 0..height - 1 {
let x = 0;
do_dithering!(color_map, image, err, x, y);
diffuse_err(image.get_pixel_mut(x + 1, y), err, 7);
diffuse_err(image.get_pixel_mut(x, y + 1), err, 5);
diffuse_err(image.get_pixel_mut(x + 1, y + 1), err, 1);
for x in 1..width - 1 {
do_dithering!(color_map, image, err, x, y);
diffuse_err(image.get_pixel_mut(x + 1, y), err, 7);
diffuse_err(image.get_pixel_mut(x - 1, y + 1), err, 3);
diffuse_err(image.get_pixel_mut(x, y + 1), err, 5);
diffuse_err(image.get_pixel_mut(x + 1, y + 1), err, 1);
}
let x = width - 1;
do_dithering!(color_map, image, err, x, y);
diffuse_err(image.get_pixel_mut(x - 1, y + 1), err, 3);
diffuse_err(image.get_pixel_mut(x, y + 1), err, 5);
}
let y = height - 1;
let x = 0;
do_dithering!(color_map, image, err, x, y);
diffuse_err(image.get_pixel_mut(x + 1, y), err, 7);
for x in 1..width - 1 {
do_dithering!(color_map, image, err, x, y);
diffuse_err(image.get_pixel_mut(x + 1, y), err, 7);
}
let x = width - 1;
do_dithering!(color_map, image, err, x, y);
}
/// Reduces the colors using the supplied `color_map` and returns an image of the indices
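///
/// # Examples
///
/// A small sketch with the `BiLevel` map: dark pixels map to index `0`, light
/// pixels to index `1`.
///
/// ```
/// use image::imageops::colorops::{index_colors, BiLevel};
/// use image::{GrayImage, ImageBuffer};
///
/// let img: GrayImage = ImageBuffer::from_raw(2, 1, vec![10u8, 200]).unwrap();
/// let indices = index_colors(&img, &BiLevel);
/// assert_eq!(indices.into_raw(), vec![0, 1]);
/// ```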
pub fn index_colors<Pix, Map>(
image: &ImageBuffer<Pix, Vec<u8>>,
color_map: &Map,
) -> ImageBuffer<Luma<u8>, Vec<u8>>
where
Map: ColorMap<Color = Pix> + ?Sized,
Pix: Pixel<Subpixel = u8> + 'static,
{
let mut indices = ImageBuffer::new(image.width(), image.height());
for (pixel, idx) in image.pixels().zip(indices.pixels_mut()) {
*idx = Luma([color_map.index_of(pixel) as u8])
}
indices
}
#[cfg(test)]
mod test {
use super::*;
use crate::{GrayImage, ImageBuffer};
macro_rules! assert_pixels_eq {
($actual:expr, $expected:expr) => {{
let actual_dim = $actual.dimensions();
let expected_dim = $expected.dimensions();
if actual_dim != expected_dim {
panic!(
"dimensions do not match. \
actual: {:?}, expected: {:?}",
actual_dim, expected_dim
)
}
let diffs = pixel_diffs($actual, $expected);
if !diffs.is_empty() {
let mut err = "".to_string();
let diff_messages = diffs
.iter()
.take(5)
.map(|d| format!("\nactual: {:?}, expected {:?} ", d.0, d.1))
.collect::<Vec<_>>()
.join("");
err.push_str(&diff_messages);
panic!("pixels do not match. {:?}", err)
}
}};
}
#[test]
fn test_dither() {
let mut image = ImageBuffer::from_raw(2, 2, vec![127, 127, 127, 127]).unwrap();
let cmap = BiLevel;
dither(&mut image, &cmap);
assert_eq!(&*image, &[0, 0xFF, 0xFF, 0]);
assert_eq!(index_colors(&image, &cmap).into_raw(), vec![0, 1, 1, 0])
}
#[test]
fn test_grayscale() {
let mut image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
assert_pixels_eq!(&grayscale(&mut image), &expected);
}
#[test]
fn test_invert() {
let mut image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![255u8, 254u8, 253u8, 245u8, 244u8, 243u8]).unwrap();
invert(&mut image);
assert_pixels_eq!(&image, &expected);
}
#[test]
fn test_brighten() {
let image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![10u8, 11u8, 12u8, 20u8, 21u8, 22u8]).unwrap();
assert_pixels_eq!(&brighten(&image, 10), &expected);
}
#[test]
fn test_brighten_place() {
let mut image: GrayImage =
ImageBuffer::from_raw(3, 2, vec![00u8, 01u8, 02u8, 10u8, 11u8, 12u8]).unwrap();
let expected: GrayImage =
ImageBuffer::from_raw(3, 2, vec![10u8, 11u8, 12u8, 20u8, 21u8, 22u8]).unwrap();
brighten_in_place(&mut image, 10);
assert_pixels_eq!(&image, &expected);
}
fn pixel_diffs<I, J, P>(left: &I, right: &J) -> Vec<((u32, u32, P), (u32, u32, P))>
where
I: GenericImage<Pixel = P>,
J: GenericImage<Pixel = P>,
P: Pixel + Eq,
{
left.pixels()
.zip(right.pixels())
.filter(|&(p, q)| p != q)
.collect::<Vec<_>>()
}
}

485
vendor/image/src/imageops/mod.rs vendored Normal file

@@ -0,0 +1,485 @@
//! Image Processing Functions
use std::cmp;
use crate::image::{GenericImage, GenericImageView, SubImage};
use crate::traits::{Lerp, Pixel, Primitive};
pub use self::sample::FilterType;
pub use self::sample::FilterType::{CatmullRom, Gaussian, Lanczos3, Nearest, Triangle};
/// Affine transformations
pub use self::affine::{
flip_horizontal, flip_horizontal_in, flip_horizontal_in_place, flip_vertical, flip_vertical_in,
flip_vertical_in_place, rotate180, rotate180_in, rotate180_in_place, rotate270, rotate270_in,
rotate90, rotate90_in,
};
/// Image sampling
pub use self::sample::{
blur, filter3x3, interpolate_bilinear, interpolate_nearest, resize, sample_bilinear,
sample_nearest, thumbnail, unsharpen,
};
/// Color operations
pub use self::colorops::{
brighten, contrast, dither, grayscale, grayscale_alpha, grayscale_with_type,
grayscale_with_type_alpha, huerotate, index_colors, invert, BiLevel, ColorMap,
};
mod affine;
// Public only because of Rust bug:
// https://github.com/rust-lang/rust/issues/18241
pub mod colorops;
mod sample;
/// Return a mutable view into an image
/// The coordinates set the position of the top left corner of the crop.
pub fn crop<I: GenericImageView>(
image: &mut I,
x: u32,
y: u32,
width: u32,
height: u32,
) -> SubImage<&mut I> {
let (x, y, width, height) = crop_dimms(image, x, y, width, height);
SubImage::new(image, x, y, width, height)
}
/// Return an immutable view into an image
/// The coordinates set the position of the top left corner of the crop.
pub fn crop_imm<I: GenericImageView>(
image: &I,
x: u32,
y: u32,
width: u32,
height: u32,
) -> SubImage<&I> {
let (x, y, width, height) = crop_dimms(image, x, y, width, height);
SubImage::new(image, x, y, width, height)
}
fn crop_dimms<I: GenericImageView>(
image: &I,
x: u32,
y: u32,
width: u32,
height: u32,
) -> (u32, u32, u32, u32) {
let (iwidth, iheight) = image.dimensions();
let x = cmp::min(x, iwidth);
let y = cmp::min(y, iheight);
let height = cmp::min(height, iheight - y);
let width = cmp::min(width, iwidth - x);
(x, y, width, height)
}
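`crop_dimms` clamps the origin to the image bounds first and then limits the extent to whatever remains, which guarantees the `iwidth - x` and `iheight - y` subtractions can never underflow. A standalone sketch of the same clamping (`clamp_crop` is a hypothetical name, not crate API):

```rust
// Standalone sketch of the clamping done by `crop_dimms`: clamp the origin
// first, then the extent, so the subtractions can never underflow.
fn clamp_crop((iw, ih): (u32, u32), x: u32, y: u32, w: u32, h: u32) -> (u32, u32, u32, u32) {
    let x = x.min(iw);
    let y = y.min(ih);
    (x, y, w.min(iw - x), h.min(ih - y))
}

fn main() {
    // A 50x50 crop requested at (90, 10) in a 100x100 image is trimmed to
    // the 10 remaining columns.
    println!("{:?}", clamp_crop((100, 100), 90, 10, 50, 50)); // (90, 10, 10, 50)
}
```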
/// Calculate the region that can be copied from top to bottom.
///
/// Given image size of bottom and top image, and a point at which we want to place the top image
/// onto the bottom image, how large can the copied region be? We have to be wary of the following issues:
/// * Top might be larger than bottom
/// * Overflows in the computation
/// * Coordinates could be completely out of bounds
///
/// The main idea is to make use of inequalities provided by the nature of `saturating_add` and
/// `saturating_sub`. These intrinsically validate that all resulting coordinates will be in bounds
/// for both images.
///
/// We want that all these coordinate accesses are safe:
/// 1. `bottom.get_pixel(x + [0..x_range), y + [0..y_range))`
/// 2. `top.get_pixel([0..x_range), [0..y_range))`
///
/// Proof that the function provides the necessary bounds for width. Note that all unaugmented math
/// operations are to be read in standard arithmetic, not integer arithmetic. Since no direct
/// integer arithmetic occurs in the implementation, this is unambiguous.
///
/// ```text
/// Three short notes/lemmata:
/// - Iff `(a - b) <= 0` then `a.saturating_sub(b) = 0`
/// - Iff `(a - b) >= 0` then `a.saturating_sub(b) = a - b`
/// - If `a <= c` then `a.saturating_sub(b) <= c.saturating_sub(b)`
///
/// 1.1 We show that if `bottom_width <= x`, then `x_range = 0` therefore `x + [0..x_range)` is empty.
///
/// x_range
/// = (top_width.saturating_add(x).min(bottom_width)).saturating_sub(x)
/// <= bottom_width.saturating_sub(x)
///
/// bottom_width <= x
/// <==> bottom_width - x <= 0
/// <==> bottom_width.saturating_sub(x) = 0
/// ==> x_range <= 0
/// ==> x_range = 0
///
/// 1.2 If `x < bottom_width` then `x + x_range < bottom_width`
///
/// x + x_range
/// <= x + bottom_width.saturating_sub(x)
/// = x + (bottom_width - x)
/// = bottom_width
///
/// 2. We show that `x_range <= top_width`
///
/// x_range
/// = (top_width.saturating_add(x).min(bottom_width)).saturating_sub(x)
/// <= top_width.saturating_add(x).saturating_sub(x)
/// <= (top_width + x).saturating_sub(x)
/// = top_width (due to `top_width >= 0` and `x >= 0`)
/// ```
///
/// Proof is the same for height.
pub fn overlay_bounds(
(bottom_width, bottom_height): (u32, u32),
(top_width, top_height): (u32, u32),
x: u32,
y: u32,
) -> (u32, u32) {
let x_range = top_width
.saturating_add(x) // Calculate max coordinate
.min(bottom_width) // Restrict to lower width
.saturating_sub(x); // Determinate length from start `x`
let y_range = top_height
.saturating_add(y)
.min(bottom_height)
.saturating_sub(y);
(x_range, y_range)
}
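The proof above can be checked numerically. Below is a one-dimensional sketch of the `x_range` expression (`range_1d` is a hypothetical name, not crate API), mirroring the saturating operations term for term:

```rust
// One-dimensional sketch of `overlay_bounds`: the saturating operations keep
// every intermediate value in bounds, exactly as argued in the proof above.
fn range_1d(bottom: u32, top: u32, offset: u32) -> u32 {
    top.saturating_add(offset)  // maximum coordinate, cannot overflow
        .min(bottom)            // restrict to the bottom image
        .saturating_sub(offset) // length measured from `offset`
}

fn main() {
    println!("{}", range_1d(12, 10, 7));  // 5: only 5 of the 10 columns fit
    println!("{}", range_1d(12, 10, 20)); // 0: placed past the right edge
    println!("{}", range_1d(10, 10, 0));  // 10: top fits exactly
}
```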
/// Calculate the region that can be copied from top to bottom.
///
/// Given image size of bottom and top image, and a point at which we want to place the top image
/// onto the bottom image, how large can the copied region be? We have to be wary of the following issues:
/// * Top might be larger than bottom
/// * Overflows in the computation
/// * Coordinates could be completely out of bounds
///
/// The returned value is of the form:
///
/// `(origin_bottom_x, origin_bottom_y, origin_top_x, origin_top_y, x_range, y_range)`
///
/// The main idea is to do computations on i64's and then clamp to image dimensions.
/// In particular, we want to ensure that all these coordinate accesses are safe:
/// 1. `bottom.get_pixel(origin_bottom_x + [0..x_range), origin_bottom_y + [0..y_range))`
/// 2. `top.get_pixel(origin_top_x + [0..x_range), origin_top_y + [0..y_range))`
///
fn overlay_bounds_ext(
(bottom_width, bottom_height): (u32, u32),
(top_width, top_height): (u32, u32),
x: i64,
y: i64,
) -> (u32, u32, u32, u32, u32, u32) {
// Return a predictable value if the two images don't overlap at all.
if x > i64::from(bottom_width)
|| y > i64::from(bottom_height)
|| x.saturating_add(i64::from(top_width)) <= 0
|| y.saturating_add(i64::from(top_height)) <= 0
{
return (0, 0, 0, 0, 0, 0);
}
// Find the maximum x and y coordinates in terms of the bottom image.
let max_x = x.saturating_add(i64::from(top_width));
let max_y = y.saturating_add(i64::from(top_height));
// Clip the origin and maximum coordinates to the bounds of the bottom image.
// Casting to a u32 is safe because both 0 and `bottom_{width,height}` fit
// into 32-bits.
let max_inbounds_x = max_x.clamp(0, i64::from(bottom_width)) as u32;
let max_inbounds_y = max_y.clamp(0, i64::from(bottom_height)) as u32;
let origin_bottom_x = x.clamp(0, i64::from(bottom_width)) as u32;
let origin_bottom_y = y.clamp(0, i64::from(bottom_height)) as u32;
// The range is the difference between the maximum inbounds coordinates and
// the clipped origin. Unchecked subtraction is safe here because both are
// always positive and `max_inbounds_{x,y}` >= `origin_{x,y}` due to
// `top_{width,height}` being >= 0.
let x_range = max_inbounds_x - origin_bottom_x;
let y_range = max_inbounds_y - origin_bottom_y;
// If x (or y) is negative, then the origin of the top image is shifted by -x (or -y).
let origin_top_x = x.saturating_mul(-1).clamp(0, i64::from(top_width)) as u32;
let origin_top_y = y.saturating_mul(-1).clamp(0, i64::from(top_height)) as u32;
(
origin_bottom_x,
origin_bottom_y,
origin_top_x,
origin_top_y,
x_range,
y_range,
)
}
/// Overlay an image at a given coordinate (x, y)
pub fn overlay<I, J>(bottom: &mut I, top: &J, x: i64, y: i64)
where
I: GenericImage,
J: GenericImageView<Pixel = I::Pixel>,
{
let bottom_dims = bottom.dimensions();
let top_dims = top.dimensions();
// Crop our top image if we're going out of bounds
let (origin_bottom_x, origin_bottom_y, origin_top_x, origin_top_y, range_width, range_height) =
overlay_bounds_ext(bottom_dims, top_dims, x, y);
for y in 0..range_height {
for x in 0..range_width {
let p = top.get_pixel(origin_top_x + x, origin_top_y + y);
let mut bottom_pixel = bottom.get_pixel(origin_bottom_x + x, origin_bottom_y + y);
bottom_pixel.blend(&p);
bottom.put_pixel(origin_bottom_x + x, origin_bottom_y + y, bottom_pixel);
}
}
}
/// Tile an image by repeating it multiple times
///
/// # Examples
/// ```no_run
/// use image::{RgbaImage};
///
/// let mut img = RgbaImage::new(1920, 1080);
/// let tile = image::open("tile.png").unwrap();
///
/// image::imageops::tile(&mut img, &tile);
/// img.save("tiled_wallpaper.png").unwrap();
/// ```
pub fn tile<I, J>(bottom: &mut I, top: &J)
where
I: GenericImage,
J: GenericImageView<Pixel = I::Pixel>,
{
for x in (0..bottom.width()).step_by(top.width() as usize) {
for y in (0..bottom.height()).step_by(top.height() as usize) {
overlay(bottom, top, i64::from(x), i64::from(y));
}
}
}
/// Fill the image with a linear vertical gradient
///
/// This function assumes a linear color space.
///
/// # Examples
/// ```no_run
/// use image::{Rgba, RgbaImage, Pixel};
///
/// let mut img = RgbaImage::new(100, 100);
/// let start = Rgba::from_slice(&[0, 128, 0, 0]);
/// let end = Rgba::from_slice(&[255, 255, 255, 255]);
///
/// image::imageops::vertical_gradient(&mut img, start, end);
/// img.save("vertical_gradient.png").unwrap();
/// ```
pub fn vertical_gradient<S, P, I>(img: &mut I, start: &P, stop: &P)
where
I: GenericImage<Pixel = P>,
P: Pixel<Subpixel = S> + 'static,
S: Primitive + Lerp + 'static,
{
for y in 0..img.height() {
let pixel = start.map2(stop, |a, b| {
let y = <S::Ratio as num_traits::NumCast>::from(y).unwrap();
let height = <S::Ratio as num_traits::NumCast>::from(img.height() - 1).unwrap();
S::lerp(a, b, y / height)
});
for x in 0..img.width() {
img.put_pixel(x, y, pixel);
}
}
}
/// Fill the image with a linear horizontal gradient
///
/// This function assumes a linear color space.
///
/// # Examples
/// ```no_run
/// use image::{Rgba, RgbaImage, Pixel};
///
/// let mut img = RgbaImage::new(100, 100);
/// let start = Rgba::from_slice(&[0, 128, 0, 0]);
/// let end = Rgba::from_slice(&[255, 255, 255, 255]);
///
/// image::imageops::horizontal_gradient(&mut img, start, end);
/// img.save("horizontal_gradient.png").unwrap();
/// ```
pub fn horizontal_gradient<S, P, I>(img: &mut I, start: &P, stop: &P)
where
I: GenericImage<Pixel = P>,
P: Pixel<Subpixel = S> + 'static,
S: Primitive + Lerp + 'static,
{
for x in 0..img.width() {
let pixel = start.map2(stop, |a, b| {
let x = <S::Ratio as num_traits::NumCast>::from(x).unwrap();
let width = <S::Ratio as num_traits::NumCast>::from(img.width() - 1).unwrap();
S::lerp(a, b, x / width)
});
for y in 0..img.height() {
img.put_pixel(x, y, pixel);
}
}
}
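Both gradient functions interpolate each channel with the ratio `x / (width - 1)` (or `y / (height - 1)`), so the first row or column is exactly `start` and the last exactly `stop`. A simplified standalone sketch, using `f32` in place of the crate's `S::Ratio` machinery (`lerp_u8` is a hypothetical helper):

```rust
// Simplified per-channel interpolation as used by the gradient fills above,
// with `f32` standing in for the crate's `S::Ratio` type.
fn lerp_u8(a: u8, b: u8, t: f32) -> u8 {
    (a as f32 + (b as f32 - a as f32) * t).round() as u8
}

fn main() {
    let (start, stop) = ([0u8, 128, 0], [255u8, 255, 255]);
    let width = 5;
    for x in 0..width {
        // The ratio runs from 0.0 at the first column to 1.0 at the last, so
        // the endpoints reproduce `start` and `stop` exactly.
        let t = x as f32 / (width - 1) as f32;
        let px: Vec<u8> = start.iter().zip(&stop).map(|(&a, &b)| lerp_u8(a, b, t)).collect();
        println!("x={x}: {px:?}");
    }
}
```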
/// Replace the contents of an image at a given coordinate (x, y)
pub fn replace<I, J>(bottom: &mut I, top: &J, x: i64, y: i64)
where
I: GenericImage,
J: GenericImageView<Pixel = I::Pixel>,
{
let bottom_dims = bottom.dimensions();
let top_dims = top.dimensions();
// Crop our top image if we're going out of bounds
let (origin_bottom_x, origin_bottom_y, origin_top_x, origin_top_y, range_width, range_height) =
overlay_bounds_ext(bottom_dims, top_dims, x, y);
for y in 0..range_height {
for x in 0..range_width {
let p = top.get_pixel(origin_top_x + x, origin_top_y + y);
bottom.put_pixel(origin_bottom_x + x, origin_bottom_y + y, p);
}
}
}
#[cfg(test)]
mod tests {
use super::{overlay, overlay_bounds_ext};
use crate::color::Rgb;
use crate::ImageBuffer;
use crate::RgbaImage;
#[test]
fn test_overlay_bounds_ext() {
assert_eq!(
overlay_bounds_ext((10, 10), (10, 10), 0, 0),
(0, 0, 0, 0, 10, 10)
);
assert_eq!(
overlay_bounds_ext((10, 10), (10, 10), 1, 0),
(1, 0, 0, 0, 9, 10)
);
assert_eq!(
overlay_bounds_ext((10, 10), (10, 10), 0, 11),
(0, 0, 0, 0, 0, 0)
);
assert_eq!(
overlay_bounds_ext((10, 10), (10, 10), -1, 0),
(0, 0, 1, 0, 9, 10)
);
assert_eq!(
overlay_bounds_ext((10, 10), (10, 10), -10, 0),
(0, 0, 0, 0, 0, 0)
);
assert_eq!(
overlay_bounds_ext((10, 10), (10, 10), 1i64 << 50, 0),
(0, 0, 0, 0, 0, 0)
);
assert_eq!(
overlay_bounds_ext((10, 10), (10, 10), -(1i64 << 50), 0),
(0, 0, 0, 0, 0, 0)
);
assert_eq!(
overlay_bounds_ext((10, 10), (u32::MAX, 10), 10 - i64::from(u32::MAX), 0),
(0, 0, u32::MAX - 10, 0, 10, 10)
);
}
#[test]
    /// Test that writing an image into another image works
fn test_image_in_image() {
let mut target = ImageBuffer::new(32, 32);
let source = ImageBuffer::from_pixel(16, 16, Rgb([255u8, 0, 0]));
overlay(&mut target, &source, 0, 0);
assert!(*target.get_pixel(0, 0) == Rgb([255u8, 0, 0]));
assert!(*target.get_pixel(15, 0) == Rgb([255u8, 0, 0]));
assert!(*target.get_pixel(16, 0) == Rgb([0u8, 0, 0]));
assert!(*target.get_pixel(0, 15) == Rgb([255u8, 0, 0]));
assert!(*target.get_pixel(0, 16) == Rgb([0u8, 0, 0]));
}
#[test]
    /// Test that an image written partially outside of the frame doesn't blow up
fn test_image_in_image_outside_of_bounds() {
let mut target = ImageBuffer::new(32, 32);
let source = ImageBuffer::from_pixel(32, 32, Rgb([255u8, 0, 0]));
overlay(&mut target, &source, 1, 1);
assert!(*target.get_pixel(0, 0) == Rgb([0, 0, 0]));
assert!(*target.get_pixel(1, 1) == Rgb([255u8, 0, 0]));
assert!(*target.get_pixel(31, 31) == Rgb([255u8, 0, 0]));
}
#[test]
    /// Test that an image written to coordinates entirely out of the frame doesn't blow up
/// (issue came up in #848)
fn test_image_outside_image_no_wrap_around() {
let mut target = ImageBuffer::new(32, 32);
let source = ImageBuffer::from_pixel(32, 32, Rgb([255u8, 0, 0]));
overlay(&mut target, &source, 33, 33);
assert!(*target.get_pixel(0, 0) == Rgb([0, 0, 0]));
assert!(*target.get_pixel(1, 1) == Rgb([0, 0, 0]));
assert!(*target.get_pixel(31, 31) == Rgb([0, 0, 0]));
}
#[test]
/// Test that images written to coordinates with overflow works
fn test_image_coordinate_overflow() {
let mut target = ImageBuffer::new(16, 16);
let source = ImageBuffer::from_pixel(32, 32, Rgb([255u8, 0, 0]));
        // Overflows to 'sane' coordinates but top is larger than bottom.
overlay(
&mut target,
&source,
i64::from(u32::max_value() - 31),
i64::from(u32::max_value() - 31),
);
assert!(*target.get_pixel(0, 0) == Rgb([0, 0, 0]));
assert!(*target.get_pixel(1, 1) == Rgb([0, 0, 0]));
assert!(*target.get_pixel(15, 15) == Rgb([0, 0, 0]));
}
use super::{horizontal_gradient, vertical_gradient};
#[test]
/// Test that horizontal gradients are correctly generated
fn test_image_horizontal_gradient_limits() {
let mut img = ImageBuffer::new(100, 1);
let start = Rgb([0u8, 128, 0]);
let end = Rgb([255u8, 255, 255]);
horizontal_gradient(&mut img, &start, &end);
assert_eq!(img.get_pixel(0, 0), &start);
assert_eq!(img.get_pixel(img.width() - 1, 0), &end);
}
#[test]
/// Test that vertical gradients are correctly generated
fn test_image_vertical_gradient_limits() {
let mut img = ImageBuffer::new(1, 100);
let start = Rgb([0u8, 128, 0]);
let end = Rgb([255u8, 255, 255]);
vertical_gradient(&mut img, &start, &end);
assert_eq!(img.get_pixel(0, 0), &start);
assert_eq!(img.get_pixel(0, img.height() - 1), &end);
}
#[test]
    /// Test blur doesn't panic when passed 0.0
fn test_blur_zero() {
let image = RgbaImage::new(50, 50);
let _ = super::blur(&image, 0.0);
}
}

1228
vendor/image/src/imageops/sample.rs vendored Normal file

File diff suppressed because it is too large

312
vendor/image/src/io/free_functions.rs vendored Normal file

@@ -0,0 +1,312 @@
use std::fs::File;
use std::io::{BufRead, BufReader, BufWriter, Seek};
use std::path::Path;
use std::u32;
use crate::codecs::*;
use crate::dynimage::DynamicImage;
use crate::error::{ImageError, ImageFormatHint, ImageResult};
use crate::image;
use crate::image::ImageFormat;
#[allow(unused_imports)] // When no features are supported
use crate::image::{ImageDecoder, ImageEncoder};
use crate::{
color,
error::{UnsupportedError, UnsupportedErrorKind},
ImageOutputFormat,
};
pub(crate) fn open_impl(path: &Path) -> ImageResult<DynamicImage> {
let buffered_read = BufReader::new(File::open(path).map_err(ImageError::IoError)?);
load(buffered_read, ImageFormat::from_path(path)?)
}
/// Create a new image from a Reader.
///
/// Assumes the reader is already buffered. For optimal performance,
/// consider wrapping the reader with a `BufReader::new()`.
///
/// Try [`io::Reader`] for more advanced uses.
///
/// [`io::Reader`]: io/struct.Reader.html
#[allow(unused_variables)]
// r is unused if no features are supported.
pub fn load<R: BufRead + Seek>(r: R, format: ImageFormat) -> ImageResult<DynamicImage> {
load_inner(r, super::Limits::default(), format)
}
pub(crate) trait DecoderVisitor {
type Result;
fn visit_decoder<'a, D: ImageDecoder<'a>>(self, decoder: D) -> ImageResult<Self::Result>;
}
pub(crate) fn load_decoder<R: BufRead + Seek, V: DecoderVisitor>(
r: R,
format: ImageFormat,
limits: super::Limits,
visitor: V,
) -> ImageResult<V::Result> {
#[allow(unreachable_patterns)]
// Default is unreachable if all features are supported.
match format {
#[cfg(feature = "avif-decoder")]
image::ImageFormat::Avif => visitor.visit_decoder(avif::AvifDecoder::new(r)?),
#[cfg(feature = "png")]
image::ImageFormat::Png => visitor.visit_decoder(png::PngDecoder::with_limits(r, limits)?),
#[cfg(feature = "gif")]
image::ImageFormat::Gif => visitor.visit_decoder(gif::GifDecoder::new(r)?),
#[cfg(feature = "jpeg")]
image::ImageFormat::Jpeg => visitor.visit_decoder(jpeg::JpegDecoder::new(r)?),
#[cfg(feature = "webp")]
image::ImageFormat::WebP => visitor.visit_decoder(webp::WebPDecoder::new(r)?),
#[cfg(feature = "tiff")]
image::ImageFormat::Tiff => visitor.visit_decoder(tiff::TiffDecoder::new(r)?),
#[cfg(feature = "tga")]
image::ImageFormat::Tga => visitor.visit_decoder(tga::TgaDecoder::new(r)?),
#[cfg(feature = "dds")]
image::ImageFormat::Dds => visitor.visit_decoder(dds::DdsDecoder::new(r)?),
#[cfg(feature = "bmp")]
image::ImageFormat::Bmp => visitor.visit_decoder(bmp::BmpDecoder::new(r)?),
#[cfg(feature = "ico")]
image::ImageFormat::Ico => visitor.visit_decoder(ico::IcoDecoder::new(r)?),
#[cfg(feature = "hdr")]
image::ImageFormat::Hdr => visitor.visit_decoder(hdr::HdrAdapter::new(BufReader::new(r))?),
#[cfg(feature = "exr")]
image::ImageFormat::OpenExr => visitor.visit_decoder(openexr::OpenExrDecoder::new(r)?),
#[cfg(feature = "pnm")]
image::ImageFormat::Pnm => visitor.visit_decoder(pnm::PnmDecoder::new(r)?),
#[cfg(feature = "farbfeld")]
image::ImageFormat::Farbfeld => visitor.visit_decoder(farbfeld::FarbfeldDecoder::new(r)?),
#[cfg(feature = "qoi")]
image::ImageFormat::Qoi => visitor.visit_decoder(qoi::QoiDecoder::new(r)?),
_ => Err(ImageError::Unsupported(
ImageFormatHint::Exact(format).into(),
)),
}
}
pub(crate) fn load_inner<R: BufRead + Seek>(
r: R,
limits: super::Limits,
format: ImageFormat,
) -> ImageResult<DynamicImage> {
struct LoadVisitor(super::Limits);
impl DecoderVisitor for LoadVisitor {
type Result = DynamicImage;
fn visit_decoder<'a, D: ImageDecoder<'a>>(
self,
mut decoder: D,
) -> ImageResult<Self::Result> {
let mut limits = self.0;
// Check that we do not allocate a bigger buffer than we are allowed to
// FIXME: should this rather go in `DynamicImage::from_decoder` somehow?
limits.reserve(decoder.total_bytes())?;
decoder.set_limits(limits)?;
DynamicImage::from_decoder(decoder)
}
}
load_decoder(r, format, limits.clone(), LoadVisitor(limits))
}
pub(crate) fn image_dimensions_impl(path: &Path) -> ImageResult<(u32, u32)> {
let format = image::ImageFormat::from_path(path)?;
let reader = BufReader::new(File::open(path)?);
image_dimensions_with_format_impl(reader, format)
}
#[allow(unused_variables)]
// fin is unused if no features are supported.
pub(crate) fn image_dimensions_with_format_impl<R: BufRead + Seek>(
buffered_read: R,
format: ImageFormat,
) -> ImageResult<(u32, u32)> {
struct DimVisitor;
impl DecoderVisitor for DimVisitor {
type Result = (u32, u32);
fn visit_decoder<'a, D: ImageDecoder<'a>>(self, decoder: D) -> ImageResult<Self::Result> {
Ok(decoder.dimensions())
}
}
load_decoder(buffered_read, format, super::Limits::default(), DimVisitor)
}
#[allow(unused_variables)]
// Most variables when no features are supported
pub(crate) fn save_buffer_impl(
path: &Path,
buf: &[u8],
width: u32,
height: u32,
color: color::ColorType,
) -> ImageResult<()> {
let format = ImageFormat::from_path(path)?;
save_buffer_with_format_impl(path, buf, width, height, color, format)
}
#[allow(unused_variables)]
// Most variables when no features are supported
pub(crate) fn save_buffer_with_format_impl(
path: &Path,
buf: &[u8],
width: u32,
height: u32,
color: color::ColorType,
format: ImageFormat,
) -> ImageResult<()> {
let buffered_file_write = &mut BufWriter::new(File::create(path)?); // always seekable
let format = match format {
#[cfg(feature = "pnm")]
image::ImageFormat::Pnm => {
let ext = path
.extension()
.and_then(|s| s.to_str())
.map_or("".to_string(), |s| s.to_ascii_lowercase());
ImageOutputFormat::Pnm(match &*ext {
"pbm" => pnm::PnmSubtype::Bitmap(pnm::SampleEncoding::Binary),
"pgm" => pnm::PnmSubtype::Graymap(pnm::SampleEncoding::Binary),
"ppm" => pnm::PnmSubtype::Pixmap(pnm::SampleEncoding::Binary),
"pam" => pnm::PnmSubtype::ArbitraryMap,
_ => {
return Err(ImageError::Unsupported(
ImageFormatHint::Exact(format).into(),
))
} // Unsupported Pnm subtype.
})
}
// #[cfg(feature = "hdr")]
// image::ImageFormat::Hdr => hdr::HdrEncoder::new(fout).encode(&[Rgb<f32>], width, height), // usize
format => format.into(),
};
write_buffer_impl(buffered_file_write, buf, width, height, color, format)
}
#[allow(unused_variables)]
// Most variables when no features are supported
pub(crate) fn write_buffer_impl<W: std::io::Write + Seek>(
buffered_write: &mut W,
buf: &[u8],
width: u32,
height: u32,
color: color::ColorType,
format: ImageOutputFormat,
) -> ImageResult<()> {
match format {
#[cfg(feature = "png")]
ImageOutputFormat::Png => {
png::PngEncoder::new(buffered_write).write_image(buf, width, height, color)
}
#[cfg(feature = "jpeg")]
ImageOutputFormat::Jpeg(quality) => {
jpeg::JpegEncoder::new_with_quality(buffered_write, quality)
.write_image(buf, width, height, color)
}
#[cfg(feature = "pnm")]
ImageOutputFormat::Pnm(subtype) => pnm::PnmEncoder::new(buffered_write)
.with_subtype(subtype)
.write_image(buf, width, height, color),
#[cfg(feature = "gif")]
ImageOutputFormat::Gif => {
gif::GifEncoder::new(buffered_write).encode(buf, width, height, color)
}
#[cfg(feature = "ico")]
ImageOutputFormat::Ico => {
ico::IcoEncoder::new(buffered_write).write_image(buf, width, height, color)
}
#[cfg(feature = "bmp")]
ImageOutputFormat::Bmp => {
bmp::BmpEncoder::new(buffered_write).write_image(buf, width, height, color)
}
#[cfg(feature = "farbfeld")]
ImageOutputFormat::Farbfeld => {
farbfeld::FarbfeldEncoder::new(buffered_write).write_image(buf, width, height, color)
}
#[cfg(feature = "tga")]
ImageOutputFormat::Tga => {
tga::TgaEncoder::new(buffered_write).write_image(buf, width, height, color)
}
#[cfg(feature = "exr")]
ImageOutputFormat::OpenExr => {
openexr::OpenExrEncoder::new(buffered_write).write_image(buf, width, height, color)
}
#[cfg(feature = "tiff")]
ImageOutputFormat::Tiff => {
tiff::TiffEncoder::new(buffered_write).write_image(buf, width, height, color)
}
#[cfg(feature = "avif-encoder")]
ImageOutputFormat::Avif => {
avif::AvifEncoder::new(buffered_write).write_image(buf, width, height, color)
}
#[cfg(feature = "qoi")]
ImageOutputFormat::Qoi => {
qoi::QoiEncoder::new(buffered_write).write_image(buf, width, height, color)
}
#[cfg(feature = "webp-encoder")]
ImageOutputFormat::WebP => {
webp::WebPEncoder::new(buffered_write).write_image(buf, width, height, color)
}
image::ImageOutputFormat::Unsupported(msg) => Err(ImageError::Unsupported(
UnsupportedError::from_format_and_kind(
ImageFormatHint::Unknown,
UnsupportedErrorKind::Format(ImageFormatHint::Name(msg)),
),
)),
}
}
static MAGIC_BYTES: [(&[u8], ImageFormat); 23] = [
(b"\x89PNG\r\n\x1a\n", ImageFormat::Png),
(&[0xff, 0xd8, 0xff], ImageFormat::Jpeg),
(b"GIF89a", ImageFormat::Gif),
(b"GIF87a", ImageFormat::Gif),
(b"RIFF", ImageFormat::WebP), // TODO: better magic byte detection, see https://github.com/image-rs/image/issues/660
(b"MM\x00*", ImageFormat::Tiff),
(b"II*\x00", ImageFormat::Tiff),
(b"DDS ", ImageFormat::Dds),
(b"BM", ImageFormat::Bmp),
(&[0, 0, 1, 0], ImageFormat::Ico),
(b"#?RADIANCE", ImageFormat::Hdr),
(b"P1", ImageFormat::Pnm),
(b"P2", ImageFormat::Pnm),
(b"P3", ImageFormat::Pnm),
(b"P4", ImageFormat::Pnm),
(b"P5", ImageFormat::Pnm),
(b"P6", ImageFormat::Pnm),
(b"P7", ImageFormat::Pnm),
(b"farbfeld", ImageFormat::Farbfeld),
(b"\0\0\0 ftypavif", ImageFormat::Avif),
(b"\0\0\0\x1cftypavif", ImageFormat::Avif),
(&[0x76, 0x2f, 0x31, 0x01], ImageFormat::OpenExr), // = &exr::meta::magic_number::BYTES
(b"qoif", ImageFormat::Qoi),
];
/// Guess image format from memory block
///
/// Makes an educated guess about the image format based on the magic bytes at the beginning.
/// TGA is not supported by this function.
/// A successful guess does not validate the rest of the memory block.
pub fn guess_format(buffer: &[u8]) -> ImageResult<ImageFormat> {
match guess_format_impl(buffer) {
Some(format) => Ok(format),
None => Err(ImageError::Unsupported(ImageFormatHint::Unknown.into())),
}
}
pub(crate) fn guess_format_impl(buffer: &[u8]) -> Option<ImageFormat> {
for &(signature, format) in &MAGIC_BYTES {
if buffer.starts_with(signature) {
return Some(format);
}
}
None
}
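Format detection is a first-match prefix test against `MAGIC_BYTES`. A minimal standalone sketch of the same lookup, with a hypothetical two-entry table standing in for the crate's 23-entry one:

```rust
// Minimal sketch of the first-match prefix test in `guess_format_impl`, using
// a hypothetical two-entry table in place of the crate's `MAGIC_BYTES`.
fn guess(buffer: &[u8]) -> Option<&'static str> {
    const TABLE: [(&[u8], &str); 2] = [
        (b"\x89PNG\r\n\x1a\n", "png"),
        (&[0xff, 0xd8, 0xff], "jpeg"),
    ];
    TABLE
        .iter()
        .find(|(sig, _)| buffer.starts_with(sig))
        .map(|&(_, name)| name)
}

fn main() {
    println!("{:?}", guess(b"\x89PNG\r\n\x1a\nrest-of-file")); // Some("png")
    println!("{:?}", guess(b"not an image"));                  // None
}
```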

166
vendor/image/src/io/mod.rs vendored Normal file

@@ -0,0 +1,166 @@
//! Input and output of images.
use std::convert::TryFrom;
use crate::{error, ImageError, ImageResult};
pub(crate) mod free_functions;
mod reader;
pub use self::reader::Reader;
/// Set of supported strict limits for a decoder.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
#[allow(missing_copy_implementations)]
#[allow(clippy::manual_non_exhaustive)]
pub struct LimitSupport {
_non_exhaustive: (),
}
#[allow(clippy::derivable_impls)]
impl Default for LimitSupport {
fn default() -> LimitSupport {
LimitSupport {
_non_exhaustive: (),
}
}
}
/// Resource limits for decoding.
///
/// Limits can be either *strict* or *non-strict*. Non-strict limits are best-effort
/// limits where the library does not guarantee that the limit will not be exceeded. Do note
/// that it is still considered a bug if a non-strict limit is exceeded, however as
/// some of the underlying decoders do not support such limits one cannot
/// rely on these limits being supported. For strict limits the library makes a stronger
/// guarantee that the limit will not be exceeded. Exceeding a strict limit is considered
/// a critical bug. If a decoder cannot guarantee that it will uphold a strict limit it
/// *must* fail with `image::error::LimitErrorKind::Unsupported`.
///
/// Currently the only strict limits supported are the `max_image_width` and `max_image_height`
/// limits, however more will be added in the future. [`LimitSupport`] will default to support
/// being false and decoders should enable support for the limits they support in
/// [`ImageDecoder::set_limits`].
///
/// The limit check should only ever fail if a limit will be exceeded or an unsupported strict
/// limit is used.
///
/// [`LimitSupport`]: ./struct.LimitSupport.html
/// [`ImageDecoder::set_limits`]: ../trait.ImageDecoder.html#method.set_limits
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
#[allow(missing_copy_implementations)]
#[allow(clippy::manual_non_exhaustive)]
pub struct Limits {
/// The maximum allowed image width. This limit is strict. The default is no limit.
pub max_image_width: Option<u32>,
/// The maximum allowed image height. This limit is strict. The default is no limit.
pub max_image_height: Option<u32>,
/// The maximum allowed sum of allocations allocated by the decoder at any one time excluding
/// allocator overhead. This limit is non-strict by default and some decoders may ignore it.
/// The default is 512MiB.
pub max_alloc: Option<u64>,
_non_exhaustive: (),
}
impl Default for Limits {
fn default() -> Limits {
Limits {
max_image_width: None,
max_image_height: None,
max_alloc: Some(512 * 1024 * 1024),
_non_exhaustive: (),
}
}
}
impl Limits {
/// Disable all limits.
pub fn no_limits() -> Limits {
Limits {
max_image_width: None,
max_image_height: None,
max_alloc: None,
_non_exhaustive: (),
}
}
/// This function checks that all currently set strict limits are supported.
pub fn check_support(&self, _supported: &LimitSupport) -> ImageResult<()> {
Ok(())
}
/// This function checks the `max_image_width` and `max_image_height` limits given
/// the image width and height.
pub fn check_dimensions(&self, width: u32, height: u32) -> ImageResult<()> {
if let Some(max_width) = self.max_image_width {
if width > max_width {
return Err(ImageError::Limits(error::LimitError::from_kind(
error::LimitErrorKind::DimensionError,
)));
}
}
if let Some(max_height) = self.max_image_height {
if height > max_height {
return Err(ImageError::Limits(error::LimitError::from_kind(
error::LimitErrorKind::DimensionError,
)));
}
}
Ok(())
}
/// This function checks that the current limit allows for reserving the set amount
/// of bytes, it then reduces the limit accordingly.
pub fn reserve(&mut self, amount: u64) -> ImageResult<()> {
if let Some(max_alloc) = self.max_alloc.as_mut() {
if *max_alloc < amount {
return Err(ImageError::Limits(error::LimitError::from_kind(
error::LimitErrorKind::InsufficientMemory,
)));
}
*max_alloc -= amount;
}
Ok(())
}
/// This function acts identically to [`reserve`], but takes a `usize` for convenience.
pub fn reserve_usize(&mut self, amount: usize) -> ImageResult<()> {
match u64::try_from(amount) {
Ok(n) => self.reserve(n),
Err(_) if self.max_alloc.is_some() => Err(ImageError::Limits(
error::LimitError::from_kind(error::LimitErrorKind::InsufficientMemory),
)),
Err(_) => {
// Out of bounds, but we weren't asked to consider any limit.
Ok(())
}
}
}
/// This function increases the `max_alloc` limit with amount. Should only be used
/// together with [`reserve`].
///
/// [`reserve`]: #method.reserve
pub fn free(&mut self, amount: u64) {
if let Some(max_alloc) = self.max_alloc.as_mut() {
*max_alloc = max_alloc.saturating_add(amount);
}
}
/// This function acts identically to [`free`], but takes a `usize` for convenience.
pub fn free_usize(&mut self, amount: usize) {
match u64::try_from(amount) {
Ok(n) => self.free(n),
Err(_) if self.max_alloc.is_some() => {
panic!("max_alloc is set, we should have exited earlier when the reserve failed");
}
Err(_) => {
// Out of bounds, but we weren't asked to consider any limit.
}
}
}
}
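The `reserve`/`free` pair above implements a simple byte budget: a successful `reserve` subtracts from `max_alloc`, a failed one leaves it untouched, and `free` returns bytes with saturating addition. A minimal stand-alone sketch of the same accounting (the `AllocBudget` type is illustrative, not the crate's API):

```rust
/// Hypothetical stand-alone sketch of the byte-budget pattern used by `Limits`.
struct AllocBudget {
    max_alloc: Option<u64>,
}

impl AllocBudget {
    /// Fail if `amount` exceeds the remaining budget, otherwise deduct it.
    fn reserve(&mut self, amount: u64) -> Result<(), ()> {
        if let Some(max) = self.max_alloc.as_mut() {
            if *max < amount {
                return Err(());
            }
            *max -= amount;
        }
        Ok(())
    }

    /// Return bytes to the budget; saturate instead of overflowing.
    fn free(&mut self, amount: u64) {
        if let Some(max) = self.max_alloc.as_mut() {
            *max = max.saturating_add(amount);
        }
    }
}

fn main() {
    let mut budget = AllocBudget { max_alloc: Some(1024) };
    assert!(budget.reserve(1000).is_ok()); // 24 bytes left
    assert!(budget.reserve(100).is_err()); // over budget; state unchanged
    budget.free(1000);                     // give the bytes back
    assert!(budget.reserve(100).is_ok());
}
```

Note that a failed `reserve` is side-effect free, which is what lets callers retry with a smaller allocation.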
vendor/image/src/io/reader.rs vendored Normal file
@@ -0,0 +1,239 @@
use std::fs::File;
use std::io::{self, BufRead, BufReader, Cursor, Read, Seek, SeekFrom};
use std::path::Path;
use crate::dynimage::DynamicImage;
use crate::error::{ImageFormatHint, UnsupportedError, UnsupportedErrorKind};
use crate::image::ImageFormat;
use crate::{ImageError, ImageResult};
use super::free_functions;
/// A multi-format image reader.
///
/// Wraps an input reader, facilitating automatic detection of an image's format, selection of
/// the appropriate decoding method, and dispatch into the set of supported [`ImageDecoder`]
/// implementations.
///
/// ## Usage
///
/// Opening a file, deducing the format based on the file path automatically, and trying to decode
/// the image contained can be performed by constructing the reader and immediately consuming it.
///
/// ```no_run
/// # use image::ImageError;
/// # use image::io::Reader;
/// # fn main() -> Result<(), ImageError> {
/// let image = Reader::open("path/to/image.png")?
/// .decode()?;
/// # Ok(()) }
/// ```
///
/// It is also possible to make a guess based on the content. This is especially handy if the
/// source is some blob in memory and you have constructed the reader in another way. Here is an
/// example with a `pnm` black-and-white subformat that encodes its pixel matrix with ASCII values.
///
/// ```
/// # use image::ImageError;
/// # use image::io::Reader;
/// # fn main() -> Result<(), ImageError> {
/// use std::io::Cursor;
/// use image::ImageFormat;
///
/// let raw_data = b"P1 2 2\n\
/// 0 1\n\
/// 1 0\n";
///
/// let mut reader = Reader::new(Cursor::new(raw_data))
/// .with_guessed_format()
/// .expect("Cursor io never fails");
/// assert_eq!(reader.format(), Some(ImageFormat::Pnm));
///
/// # #[cfg(feature = "pnm")]
/// let image = reader.decode()?;
/// # Ok(()) }
/// ```
///
/// As a final fallback or if only a specific format must be used, the reader always allows manual
/// specification of the supposed image format with [`set_format`].
///
/// [`set_format`]: #method.set_format
/// [`ImageDecoder`]: ../trait.ImageDecoder.html
pub struct Reader<R: Read> {
/// The reader. Should be buffered.
inner: R,
/// The format, if one has been set or deduced.
format: Option<ImageFormat>,
/// Decoding limits
limits: super::Limits,
}
impl<R: Read> Reader<R> {
/// Create a new image reader without a preset format.
///
/// Assumes the reader is already buffered. For optimal performance,
/// consider wrapping the reader with a `BufReader::new()`.
///
/// It is possible to guess the format based on the content of the read object with
/// [`with_guessed_format`], or to set the format directly with [`set_format`].
///
/// [`with_guessed_format`]: #method.with_guessed_format
    /// [`set_format`]: #method.set_format
pub fn new(buffered_reader: R) -> Self {
Reader {
inner: buffered_reader,
format: None,
limits: super::Limits::default(),
}
}
/// Construct a reader with specified format.
///
/// Assumes the reader is already buffered. For optimal performance,
/// consider wrapping the reader with a `BufReader::new()`.
pub fn with_format(buffered_reader: R, format: ImageFormat) -> Self {
Reader {
inner: buffered_reader,
format: Some(format),
limits: super::Limits::default(),
}
}
/// Get the currently determined format.
pub fn format(&self) -> Option<ImageFormat> {
self.format
}
/// Supply the format as which to interpret the read image.
pub fn set_format(&mut self, format: ImageFormat) {
self.format = Some(format);
}
/// Remove the current information on the image format.
///
/// Note that many operations require format information to be present and will return e.g. an
/// `ImageError::Unsupported` when the image format has not been set.
pub fn clear_format(&mut self) {
self.format = None;
}
/// Disable all decoding limits.
pub fn no_limits(&mut self) {
self.limits = super::Limits::no_limits();
}
/// Set a custom set of decoding limits.
pub fn limits(&mut self, limits: super::Limits) {
self.limits = limits;
}
/// Unwrap the reader.
pub fn into_inner(self) -> R {
self.inner
}
}
impl Reader<BufReader<File>> {
/// Open a file to read, format will be guessed from path.
///
/// This will not attempt any io operation on the opened file.
///
/// If you want to inspect the content for a better guess on the format, which does not depend
/// on file extensions, follow this call with a call to [`with_guessed_format`].
///
/// [`with_guessed_format`]: #method.with_guessed_format
pub fn open<P>(path: P) -> io::Result<Self>
where
P: AsRef<Path>,
{
Self::open_impl(path.as_ref())
}
fn open_impl(path: &Path) -> io::Result<Self> {
Ok(Reader {
inner: BufReader::new(File::open(path)?),
format: ImageFormat::from_path(path).ok(),
limits: super::Limits::default(),
})
}
}
impl<R: BufRead + Seek> Reader<R> {
/// Make a format guess based on the content, replacing it on success.
///
/// Returns `Ok` with the guess if no io error occurs. Additionally, replaces the current
/// format if the guess was successful. If the guess was unable to determine a format then
/// the current format of the reader is unchanged.
///
/// Returns an error if the underlying reader fails. The format is unchanged. The error is a
/// `std::io::Error` and not `ImageError` since the only error case is an error when the
/// underlying reader seeks.
///
/// When an error occurs, the reader may not have been properly reset and it is potentially
/// hazardous to continue with more io.
///
/// ## Usage
///
    /// This supplements the path based type deduction from [`open`](Reader::open) with content based deduction.
    /// This is more common on Linux and UNIX operating systems and is also helpful if the path can
    /// not be directly controlled.
///
/// ```no_run
/// # use image::ImageError;
/// # use image::io::Reader;
/// # fn main() -> Result<(), ImageError> {
/// let image = Reader::open("image.unknown")?
/// .with_guessed_format()?
/// .decode()?;
/// # Ok(()) }
/// ```
pub fn with_guessed_format(mut self) -> io::Result<Self> {
let format = self.guess_format()?;
// Replace format if found, keep current state if not.
self.format = format.or(self.format);
Ok(self)
}
fn guess_format(&mut self) -> io::Result<Option<ImageFormat>> {
let mut start = [0; 16];
// Save current offset, read start, restore offset.
let cur = self.inner.stream_position()?;
let len = io::copy(
// Accept shorter files but read at most 16 bytes.
&mut self.inner.by_ref().take(16),
&mut Cursor::new(&mut start[..]),
)?;
self.inner.seek(SeekFrom::Start(cur))?;
Ok(free_functions::guess_format_impl(&start[..len as usize]))
}
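The save-position/read/restore dance in `guess_format` generalizes to any `Read + Seek` source: remember the stream position, read a bounded prefix, then seek back so later reads see the whole stream. A sketch of the same non-destructive peek (the `peek_header` name is illustrative):

```rust
use std::io::{self, Cursor, Read, Seek, SeekFrom};

/// Read up to 16 bytes from `r` without disturbing its position.
fn peek_header<R: Read + Seek>(r: &mut R) -> io::Result<Vec<u8>> {
    let mut start = [0u8; 16];
    let cur = r.stream_position()?; // remember where we were
    // `take(16)` accepts shorter inputs but never reads past 16 bytes.
    let len = io::copy(&mut r.by_ref().take(16), &mut Cursor::new(&mut start[..]))?;
    r.seek(SeekFrom::Start(cur))?; // rewind so later reads see everything
    Ok(start[..len as usize].to_vec())
}

fn main() -> io::Result<()> {
    let mut data = Cursor::new(b"\x89PNG\r\n\x1a\nrest-of-file".to_vec());
    let header = peek_header(&mut data)?;
    assert!(header.starts_with(b"\x89PNG"));
    assert_eq!(data.stream_position()?, 0); // position restored
    Ok(())
}
```

If the seek back fails, the stream is left mid-header, which is why the surrounding docs warn that continuing I/O after an error is hazardous.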
/// Read the image dimensions.
///
/// Uses the current format to construct the correct reader for the format.
///
/// If no format was determined, returns an `ImageError::Unsupported`.
pub fn into_dimensions(mut self) -> ImageResult<(u32, u32)> {
let format = self.require_format()?;
free_functions::image_dimensions_with_format_impl(self.inner, format)
}
/// Read the image (replaces `load`).
///
/// Uses the current format to construct the correct reader for the format.
///
/// If no format was determined, returns an `ImageError::Unsupported`.
pub fn decode(mut self) -> ImageResult<DynamicImage> {
let format = self.require_format()?;
free_functions::load_inner(self.inner, self.limits, format)
}
fn require_format(&mut self) -> ImageResult<ImageFormat> {
self.format.ok_or_else(|| {
ImageError::Unsupported(UnsupportedError::from_format_and_kind(
ImageFormatHint::Unknown,
UnsupportedErrorKind::Format(ImageFormatHint::Unknown),
))
})
}
}

vendor/image/src/lib.rs vendored Normal file
@@ -0,0 +1,310 @@
//! # Overview
//!
//! This crate provides native rust implementations of image encoding and decoding as well as some
//! basic image manipulation functions. Additional documentation can currently also be found in the
//! [README.md file which is most easily viewed on
//! github](https://github.com/image-rs/image/blob/master/README.md).
//!
//! There are two core problems for which this library provides solutions: a unified interface for image
//! encodings and simple generic buffers for their content. It's possible to use either feature
//! without the other. The focus is on a small and stable set of common operations that can be
//! supplemented by other specialized crates. The library also prefers safe solutions with few
//! dependencies.
//!
//! # High level API
//!
//! Load images using [`io::Reader`]:
//!
//! ```rust,no_run
//! use std::io::Cursor;
//! use image::io::Reader as ImageReader;
//! # fn main() -> Result<(), image::ImageError> {
//! # let bytes = vec![0u8];
//!
//! let img = ImageReader::open("myimage.png")?.decode()?;
//! let img2 = ImageReader::new(Cursor::new(bytes)).with_guessed_format()?.decode()?;
//! # Ok(())
//! # }
//! ```
//!
//! And save them using [`save`] or [`write_to`] methods:
//!
//! ```rust,no_run
//! # use std::io::{Write, Cursor};
//! # use image::{DynamicImage, ImageOutputFormat};
//! # #[cfg(feature = "png")]
//! # fn main() -> Result<(), image::ImageError> {
//! # let img: DynamicImage = unimplemented!();
//! # let img2: DynamicImage = unimplemented!();
//! img.save("empty.jpg")?;
//!
//! let mut bytes: Vec<u8> = Vec::new();
//! img2.write_to(&mut Cursor::new(&mut bytes), image::ImageOutputFormat::Png)?;
//! # Ok(())
//! # }
//! # #[cfg(not(feature = "png"))] fn main() {}
//! ```
//!
//! With default features, the crate includes support for [many common image formats](codecs/index.html#supported-formats).
//!
//! [`save`]: enum.DynamicImage.html#method.save
//! [`write_to`]: enum.DynamicImage.html#method.write_to
//! [`io::Reader`]: io/struct.Reader.html
//!
//! # Image buffers
//!
//! The two main types for storing images:
//! * [`ImageBuffer`] which holds statically typed image contents.
//! * [`DynamicImage`] which is an enum over the supported ImageBuffer formats
//! and supports conversions between them.
//!
//! As well as a few more specialized options:
//! * [`GenericImage`] trait for a mutable image buffer.
//! * [`GenericImageView`] trait for read only references to a GenericImage.
//! * [`flat`] module containing types for interoperability with generic channel
//! matrices and foreign interfaces.
//!
//! [`GenericImageView`]: trait.GenericImageView.html
//! [`GenericImage`]: trait.GenericImage.html
//! [`ImageBuffer`]: struct.ImageBuffer.html
//! [`DynamicImage`]: enum.DynamicImage.html
//! [`flat`]: flat/index.html
//!
//! # Low level encoding/decoding API
//!
//! Implementations of [`ImageEncoder`] provide low-level control over encoding:
//! ```rust,no_run
//! # use std::io::Write;
//! # use image::DynamicImage;
//! # use image::ImageEncoder;
//! # #[cfg(feature = "jpeg")]
//! # fn main() -> Result<(), image::ImageError> {
//! # use image::codecs::jpeg::JpegEncoder;
//! # let img: DynamicImage = unimplemented!();
//! # let writer: Box<dyn Write> = unimplemented!();
//! let encoder = JpegEncoder::new_with_quality(&mut writer, 95);
//! img.write_with_encoder(encoder)?;
//! # Ok(())
//! # }
//! # #[cfg(not(feature = "jpeg"))] fn main() {}
//! ```
//! While [`ImageDecoder`] and [`ImageDecoderRect`] give access to more advanced decoding options:
//!
//! ```rust,no_run
//! # use std::io::Read;
//! # use image::DynamicImage;
//! # use image::ImageDecoder;
//! # #[cfg(feature = "png")]
//! # fn main() -> Result<(), image::ImageError> {
//! # use image::codecs::png::PngDecoder;
//! # let img: DynamicImage = unimplemented!();
//! # let reader: Box<dyn Read> = unimplemented!();
//! let decoder = PngDecoder::new(&mut reader)?;
//! let icc = decoder.icc_profile();
//! let img = DynamicImage::from_decoder(decoder)?;
//! # Ok(())
//! # }
//! # #[cfg(not(feature = "png"))] fn main() {}
//! ```
//!
//! [`DynamicImage::from_decoder`]: enum.DynamicImage.html#method.from_decoder
//! [`ImageDecoderRect`]: trait.ImageDecoderRect.html
//! [`ImageDecoder`]: trait.ImageDecoder.html
//! [`ImageEncoder`]: trait.ImageEncoder.html
#![warn(missing_docs)]
#![warn(unused_qualifications)]
#![deny(unreachable_pub)]
#![deny(deprecated)]
#![deny(missing_copy_implementations)]
#![cfg_attr(all(test, feature = "benchmarks"), feature(test))]
// it's a bit of a pain otherwise
#![allow(clippy::many_single_char_names)]
// it's a backwards compatibility break
#![allow(clippy::wrong_self_convention, clippy::enum_variant_names)]
#[cfg(all(test, feature = "benchmarks"))]
extern crate test;
#[cfg(test)]
#[macro_use]
extern crate quickcheck;
pub use crate::color::{ColorType, ExtendedColorType};
pub use crate::color::{Luma, LumaA, Rgb, Rgba};
pub use crate::error::{ImageError, ImageResult};
pub use crate::image::{
AnimationDecoder,
GenericImage,
GenericImageView,
ImageDecoder,
ImageDecoderRect,
ImageEncoder,
ImageFormat,
ImageOutputFormat,
// Iterators
Pixels,
Progress,
SubImage,
};
pub use crate::buffer_::{
GrayAlphaImage,
GrayImage,
// Image types
ImageBuffer,
Rgb32FImage,
RgbImage,
Rgba32FImage,
RgbaImage,
};
pub use crate::flat::FlatSamples;
// Traits
pub use crate::traits::{EncodableLayout, Pixel, PixelWithColorType, Primitive};
// Opening and loading images
pub use crate::dynimage::{
image_dimensions, load_from_memory, load_from_memory_with_format, open, save_buffer,
save_buffer_with_format, write_buffer_with_format,
};
pub use crate::io::free_functions::{guess_format, load};
pub use crate::dynimage::DynamicImage;
pub use crate::animation::{Delay, Frame, Frames};
// More detailed error type
pub mod error;
/// Iterators and other auxiliary structure for the `ImageBuffer` type.
pub mod buffer {
// Only those not exported at the top-level
pub use crate::buffer_::{
ConvertBuffer, EnumeratePixels, EnumeratePixelsMut, EnumerateRows, EnumerateRowsMut,
Pixels, PixelsMut, Rows, RowsMut,
};
}
// Math utils
pub mod math;
// Image processing functions
pub mod imageops;
// Io bindings
pub mod io;
// Buffer representations for ffi.
pub mod flat;
/// Encoding and decoding for various image file formats.
///
/// # Supported formats
///
/// <!--- NOTE: Make sure to keep this table in sync with the README -->
///
/// | Format | Decoding | Encoding |
/// | ------ | -------- | -------- |
/// | AVIF | Only 8-bit | Lossy |
/// | BMP | Yes | Rgb8, Rgba8, Gray8, GrayA8 |
/// | DDS | DXT1, DXT3, DXT5 | No |
/// | Farbfeld | Yes | Yes |
/// | GIF | Yes | Yes |
/// | ICO | Yes | Yes |
/// | JPEG | Baseline and progressive | Baseline JPEG |
/// | OpenEXR | Rgb32F, Rgba32F (no dwa compression) | Rgb32F, Rgba32F (no dwa compression) |
/// | PNG | All supported color types | Same as decoding |
/// | PNM | PBM, PGM, PPM, standard PAM | Yes |
/// | QOI | Yes | Yes |
/// | TGA | Yes | Rgb8, Rgba8, Bgr8, Bgra8, Gray8, GrayA8 |
/// | TIFF | Baseline(no fax support) + LZW + PackBits | Rgb8, Rgba8, Gray8 |
/// | WebP | Yes | Rgb8, Rgba8 |
///
/// ## A note on format specific features
///
/// One of the main goals of `image` is stability, in runtime but also for programmers. This
/// ensures that performance as well as safety fixes reach a majority of its user base with little
/// effort. Re-exporting all details of its dependencies would run counter to this goal, as it
/// would link _all_ major version bumps between them and `image`. As such, we are wary of exposing too
/// many details, or configuration options, that are not shared between different image formats.
///
/// Nevertheless, the advantage of precise control is hard to ignore. We will thus consider
/// _wrappers_, not direct re-exports, in either of the following cases:
///
/// 1. A standard specifies that configuration _x_ is required for decoders/encoders and there
/// exists an essentially canonical way to control it.
/// 2. At least two different implementations agree on some (sub-)set of features in practice.
/// 3. A technical argument including measurements of the performance, space benefits, or otherwise
/// objectively quantified benefits can be made, and the added interface is unlikely to require
/// breaking changes.
///
/// Features that fulfill two or more criteria are preferred.
///
/// Re-exports of dependencies that reach version `1` will be discussed when it happens.
pub mod codecs {
#[cfg(any(feature = "avif-encoder", feature = "avif-decoder"))]
pub mod avif;
#[cfg(feature = "bmp")]
pub mod bmp;
#[cfg(feature = "dds")]
pub mod dds;
#[cfg(feature = "dxt")]
#[deprecated = "DXT support will be removed or reworked in a future version. Prefer the `squish` crate instead. See https://github.com/image-rs/image/issues/1623"]
pub mod dxt;
#[cfg(feature = "farbfeld")]
pub mod farbfeld;
#[cfg(feature = "gif")]
pub mod gif;
#[cfg(feature = "hdr")]
pub mod hdr;
#[cfg(feature = "ico")]
pub mod ico;
#[cfg(feature = "jpeg")]
pub mod jpeg;
#[cfg(feature = "exr")]
pub mod openexr;
#[cfg(feature = "png")]
pub mod png;
#[cfg(feature = "pnm")]
pub mod pnm;
#[cfg(feature = "qoi")]
pub mod qoi;
#[cfg(feature = "tga")]
pub mod tga;
#[cfg(feature = "tiff")]
pub mod tiff;
#[cfg(any(feature = "webp", feature = "webp-encoder"))]
pub mod webp;
}
mod animation;
#[path = "buffer.rs"]
mod buffer_;
mod color;
mod dynimage;
mod image;
mod traits;
mod utils;
// Can't use the macro-call itself within the `doc` attribute, so force it to be evaluated as
// part of the macro invocation.
//
// The inspiration for the macro and implementation is from
// <https://github.com/GuillaumeGomez/doc-comment>
//
// MIT License
//
// Copyright (c) 2018 Guillaume Gomez
macro_rules! insert_as_doc {
{ $content:expr } => {
#[allow(unused_doc_comments)]
#[doc = $content] extern { }
}
}
// Provides the README.md as doc, to ensure the example works!
insert_as_doc!(include_str!("../README.md"));

vendor/image/src/math/mod.rs vendored Normal file
@@ -0,0 +1,6 @@
//! Mathematical helper functions and types.
mod rect;
mod utils;
pub use self::rect::Rect;
pub(super) use utils::resize_dimensions;

vendor/image/src/math/rect.rs vendored Normal file
@@ -0,0 +1,12 @@
/// A Rectangle defined by its top left corner, width and height.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct Rect {
/// The x coordinate of the top left corner.
pub x: u32,
/// The y coordinate of the top left corner.
pub y: u32,
/// The rectangle's width.
pub width: u32,
/// The rectangle's height.
pub height: u32,
}

vendor/image/src/math/utils.rs vendored Normal file
@@ -0,0 +1,123 @@
//! Shared mathematical utility functions.
use std::cmp::max;
/// Calculates the width and height an image should be resized to.
/// This preserves aspect ratio. If `fill` is true, the result covers the
/// requested dimensions completely, overflowing the specified bounds on one
/// axis to preserve aspect ratio. Otherwise, the image shrinks so that both
/// dimensions fit entirely within the given `nwidth` and `nheight`,
/// leaving empty space on one axis.
pub(crate) fn resize_dimensions(
width: u32,
height: u32,
nwidth: u32,
nheight: u32,
fill: bool,
) -> (u32, u32) {
let wratio = nwidth as f64 / width as f64;
let hratio = nheight as f64 / height as f64;
let ratio = if fill {
f64::max(wratio, hratio)
} else {
f64::min(wratio, hratio)
};
let nw = max((width as f64 * ratio).round() as u64, 1);
let nh = max((height as f64 * ratio).round() as u64, 1);
if nw > u64::from(u32::MAX) {
let ratio = u32::MAX as f64 / width as f64;
(u32::MAX, max((height as f64 * ratio).round() as u32, 1))
} else if nh > u64::from(u32::MAX) {
let ratio = u32::MAX as f64 / height as f64;
(max((width as f64 * ratio).round() as u32, 1), u32::MAX)
} else {
(nw as u32, nh as u32)
}
}
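In concrete numbers, "fit" takes the smaller of the two scale ratios while "fill" takes the larger. The sketch below reproduces just the ratio choice (it omits the `u32::MAX` overflow guards of the real function, so it is illustrative only):

```rust
/// Simplified fit/fill scaling: preserves aspect ratio, no overflow guards.
fn scaled(width: u32, height: u32, nwidth: u32, nheight: u32, fill: bool) -> (u32, u32) {
    let wratio = nwidth as f64 / width as f64;
    let hratio = nheight as f64 / height as f64;
    // "fill" covers the target completely; "fit" stays inside it.
    let ratio = if fill { wratio.max(hratio) } else { wratio.min(hratio) };
    (
        ((width as f64 * ratio).round() as u32).max(1),
        ((height as f64 * ratio).round() as u32).max(1),
    )
}

fn main() {
    // 100x200 into 200x500: wratio = 2.0, hratio = 2.5.
    assert_eq!(scaled(100, 200, 200, 500, false), (200, 400)); // fit: limited by width
    assert_eq!(scaled(100, 200, 200, 500, true), (250, 500));  // fill: overflows width bound
}
```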
#[cfg(test)]
mod test {
quickcheck! {
fn resize_bounds_correctly_width(old_w: u32, new_w: u32) -> bool {
if old_w == 0 || new_w == 0 { return true; }
// In this case, the scaling is limited by scaling of height.
// We could check that case separately but it does not conform to the same expectation.
if new_w as u64 * 400u64 >= old_w as u64 * u64::from(u32::MAX) { return true; }
let result = super::resize_dimensions(old_w, 400, new_w, ::std::u32::MAX, false);
let exact = (400 as f64 * new_w as f64 / old_w as f64).round() as u32;
result.0 == new_w && result.1 == exact.max(1)
}
}
quickcheck! {
fn resize_bounds_correctly_height(old_h: u32, new_h: u32) -> bool {
if old_h == 0 || new_h == 0 { return true; }
// In this case, the scaling is limited by scaling of width.
// We could check that case separately but it does not conform to the same expectation.
if 400u64 * new_h as u64 >= old_h as u64 * u64::from(u32::MAX) { return true; }
let result = super::resize_dimensions(400, old_h, ::std::u32::MAX, new_h, false);
let exact = (400 as f64 * new_h as f64 / old_h as f64).round() as u32;
result.1 == new_h && result.0 == exact.max(1)
}
}
#[test]
fn resize_handles_fill() {
let result = super::resize_dimensions(100, 200, 200, 500, true);
assert!(result.0 == 250);
assert!(result.1 == 500);
let result = super::resize_dimensions(200, 100, 500, 200, true);
assert!(result.0 == 500);
assert!(result.1 == 250);
}
#[test]
fn resize_never_rounds_to_zero() {
let result = super::resize_dimensions(1, 150, 128, 128, false);
assert!(result.0 > 0);
assert!(result.1 > 0);
}
#[test]
fn resize_handles_overflow() {
let result = super::resize_dimensions(100, ::std::u32::MAX, 200, ::std::u32::MAX, true);
assert!(result.0 == 100);
assert!(result.1 == ::std::u32::MAX);
let result = super::resize_dimensions(::std::u32::MAX, 100, ::std::u32::MAX, 200, true);
assert!(result.0 == ::std::u32::MAX);
assert!(result.1 == 100);
}
#[test]
fn resize_rounds() {
// Only truncation will result in (3840, 2229) and (2160, 3719)
let result = super::resize_dimensions(4264, 2476, 3840, 2160, true);
assert_eq!(result, (3840, 2230));
let result = super::resize_dimensions(2476, 4264, 2160, 3840, false);
assert_eq!(result, (2160, 3720));
}
#[test]
fn resize_handles_zero() {
let result = super::resize_dimensions(0, 100, 100, 100, false);
assert_eq!(result, (1, 100));
let result = super::resize_dimensions(100, 0, 100, 100, false);
assert_eq!(result, (100, 1));
let result = super::resize_dimensions(100, 100, 0, 100, false);
assert_eq!(result, (1, 1));
let result = super::resize_dimensions(100, 100, 100, 0, false);
assert_eq!(result, (1, 1));
}
}

vendor/image/src/traits.rs vendored Normal file
@@ -0,0 +1,370 @@
//! This module provides useful traits that were deprecated in Rust
// Note copied from the stdlib under MIT license
use num_traits::{Bounded, Num, NumCast};
use std::ops::AddAssign;
use crate::color::{ColorType, Luma, LumaA, Rgb, Rgba};
/// Types which are safe to treat as an immutable byte slice in a pixel layout
/// for image encoding.
pub trait EncodableLayout: seals::EncodableLayout {
/// Get the bytes of this value.
fn as_bytes(&self) -> &[u8];
}
impl EncodableLayout for [u8] {
fn as_bytes(&self) -> &[u8] {
bytemuck::cast_slice(self)
}
}
impl EncodableLayout for [u16] {
fn as_bytes(&self) -> &[u8] {
bytemuck::cast_slice(self)
}
}
impl EncodableLayout for [f32] {
fn as_bytes(&self) -> &[u8] {
bytemuck::cast_slice(self)
}
}
/// The type of each channel in a pixel. For example, this can be `u8`, `u16`, `f32`.
// TODO rename to `PixelComponent`? Split up into separate traits? Seal?
pub trait Primitive: Copy + NumCast + Num + PartialOrd<Self> + Clone + Bounded {
/// The maximum value for this type of primitive within the context of color.
/// For floats, the maximum is `1.0`, whereas the integer types inherit their usual maximum values.
const DEFAULT_MAX_VALUE: Self;
/// The minimum value for this type of primitive within the context of color.
/// For floats, the minimum is `0.0`, whereas the integer types inherit their usual minimum values.
const DEFAULT_MIN_VALUE: Self;
}
macro_rules! declare_primitive {
($base:ty: ($from:expr)..$to:expr) => {
impl Primitive for $base {
const DEFAULT_MAX_VALUE: Self = $to;
const DEFAULT_MIN_VALUE: Self = $from;
}
};
}
declare_primitive!(usize: (0)..Self::MAX);
declare_primitive!(u8: (0)..Self::MAX);
declare_primitive!(u16: (0)..Self::MAX);
declare_primitive!(u32: (0)..Self::MAX);
declare_primitive!(u64: (0)..Self::MAX);
declare_primitive!(isize: (Self::MIN)..Self::MAX);
declare_primitive!(i8: (Self::MIN)..Self::MAX);
declare_primitive!(i16: (Self::MIN)..Self::MAX);
declare_primitive!(i32: (Self::MIN)..Self::MAX);
declare_primitive!(i64: (Self::MIN)..Self::MAX);
declare_primitive!(f32: (0.0)..1.0);
declare_primitive!(f64: (0.0)..1.0);
/// An `Enlargeable::Larger` value should be enough to calculate
/// the sum (average) of a few hundred or thousand `Enlargeable` values.
pub trait Enlargeable: Sized + Bounded + NumCast {
type Larger: Copy + NumCast + Num + PartialOrd<Self::Larger> + Clone + Bounded + AddAssign;
fn clamp_from(n: Self::Larger) -> Self {
if n > Self::max_value().to_larger() {
Self::max_value()
} else if n < Self::min_value().to_larger() {
Self::min_value()
} else {
NumCast::from(n).unwrap()
}
}
fn to_larger(self) -> Self::Larger {
NumCast::from(self).unwrap()
}
}
impl Enlargeable for u8 {
type Larger = u32;
}
impl Enlargeable for u16 {
type Larger = u32;
}
impl Enlargeable for u32 {
type Larger = u64;
}
impl Enlargeable for u64 {
type Larger = u128;
}
impl Enlargeable for usize {
// Note: On 32-bit architectures, u64 should be enough here.
type Larger = u128;
}
impl Enlargeable for i8 {
type Larger = i32;
}
impl Enlargeable for i16 {
type Larger = i32;
}
impl Enlargeable for i32 {
type Larger = i64;
}
impl Enlargeable for i64 {
type Larger = i128;
}
impl Enlargeable for isize {
// Note: On 32-bit architectures, i64 should be enough here.
type Larger = i128;
}
impl Enlargeable for f32 {
type Larger = f64;
}
impl Enlargeable for f64 {
type Larger = f64;
}
/// Linear interpolation without involving floating-point numbers.
pub trait Lerp: Bounded + NumCast {
type Ratio: Primitive;
fn lerp(a: Self, b: Self, ratio: Self::Ratio) -> Self {
let a = <Self::Ratio as NumCast>::from(a).unwrap();
let b = <Self::Ratio as NumCast>::from(b).unwrap();
let res = a + (b - a) * ratio;
if res > NumCast::from(Self::max_value()).unwrap() {
Self::max_value()
} else if res < NumCast::from(0).unwrap() {
NumCast::from(0).unwrap()
} else {
NumCast::from(res).unwrap()
}
}
}
impl Lerp for u8 {
type Ratio = f32;
}
impl Lerp for u16 {
type Ratio = f32;
}
impl Lerp for u32 {
type Ratio = f64;
}
impl Lerp for f32 {
type Ratio = f32;
fn lerp(a: Self, b: Self, ratio: Self::Ratio) -> Self {
a + (b - a) * ratio
}
}
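The default `lerp` body computes in the `Ratio` type and clamps on the way back, so a ratio outside `[0, 1]` can extrapolate without overflowing the integer channel. A sketch of that logic specialized to `u8` with an `f32` ratio, matching the `Lerp for u8` pairing above:

```rust
/// The default `Lerp` body, specialized to u8 with an f32 ratio.
fn lerp_u8(a: u8, b: u8, ratio: f32) -> u8 {
    let res = a as f32 + (b as f32 - a as f32) * ratio;
    // Clamp back into the channel's range, as the trait's default body does.
    if res > u8::MAX as f32 {
        u8::MAX
    } else if res < 0.0 {
        0
    } else {
        res as u8
    }
}

fn main() {
    assert_eq!(lerp_u8(0, 200, 0.5), 100);
    assert_eq!(lerp_u8(0, 200, 2.0), 255); // 400 would overflow; clamped
    assert_eq!(lerp_u8(200, 0, 2.0), 0);   // negative result clamped to zero
}
```

The `f32` implementation below skips the clamp entirely, since floats can represent out-of-range intermediates.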
/// The pixel with an associated `ColorType`.
/// Not all possible pixels represent one of the predefined `ColorType`s.
pub trait PixelWithColorType: Pixel + self::private::SealedPixelWithColorType {
/// This pixel has the format of one of the predefined `ColorType`s,
/// such as `Rgb8`, `La16` or `Rgba32F`.
/// This is needed for automatically detecting
/// a color format when saving an image as a file.
const COLOR_TYPE: ColorType;
}
impl PixelWithColorType for Rgb<u8> {
const COLOR_TYPE: ColorType = ColorType::Rgb8;
}
impl PixelWithColorType for Rgb<u16> {
const COLOR_TYPE: ColorType = ColorType::Rgb16;
}
impl PixelWithColorType for Rgb<f32> {
const COLOR_TYPE: ColorType = ColorType::Rgb32F;
}
impl PixelWithColorType for Rgba<u8> {
const COLOR_TYPE: ColorType = ColorType::Rgba8;
}
impl PixelWithColorType for Rgba<u16> {
const COLOR_TYPE: ColorType = ColorType::Rgba16;
}
impl PixelWithColorType for Rgba<f32> {
const COLOR_TYPE: ColorType = ColorType::Rgba32F;
}
impl PixelWithColorType for Luma<u8> {
const COLOR_TYPE: ColorType = ColorType::L8;
}
impl PixelWithColorType for Luma<u16> {
const COLOR_TYPE: ColorType = ColorType::L16;
}
impl PixelWithColorType for LumaA<u8> {
const COLOR_TYPE: ColorType = ColorType::La8;
}
impl PixelWithColorType for LumaA<u16> {
const COLOR_TYPE: ColorType = ColorType::La16;
}
/// Prevents downstream users from implementing the `PixelWithColorType` trait
mod private {
use crate::color::*;
pub trait SealedPixelWithColorType {}
impl SealedPixelWithColorType for Rgb<u8> {}
impl SealedPixelWithColorType for Rgb<u16> {}
impl SealedPixelWithColorType for Rgb<f32> {}
impl SealedPixelWithColorType for Rgba<u8> {}
impl SealedPixelWithColorType for Rgba<u16> {}
impl SealedPixelWithColorType for Rgba<f32> {}
impl SealedPixelWithColorType for Luma<u8> {}
impl SealedPixelWithColorType for LumaA<u8> {}
impl SealedPixelWithColorType for Luma<u16> {}
impl SealedPixelWithColorType for LumaA<u16> {}
}
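The private `SealedPixelWithColorType` supertrait is the standard sealing pattern: downstream crates can name and use the public trait, but cannot implement it, because its supertrait lives in a private module they cannot reach. A minimal stand-alone sketch of the pattern (illustrative names):

```rust
mod sealed {
    // Public trait in a private module: usable as a bound inside this
    // crate, but unnameable (and thus unimplementable) from outside.
    pub trait Sealed {}
    impl Sealed for u8 {}
}

/// Public trait that only this crate can implement, because `Sealed`
/// is a supertrait that external code cannot reach.
pub trait Channel: sealed::Sealed {
    fn max() -> Self;
}

impl Channel for u8 {
    fn max() -> Self {
        u8::MAX
    }
}

fn main() {
    assert_eq!(<u8 as Channel>::max(), 255);
}
```

A downstream `impl Channel for MyType` fails to compile with "trait `Sealed` is private", which is exactly the guarantee `PixelWithColorType` relies on.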
/// A generalized pixel.
///
/// A pixel object is usually not used standalone but as a view into an image buffer.
pub trait Pixel: Copy + Clone {
/// The scalar type that is used to store each channel in this pixel.
type Subpixel: Primitive;
/// The number of channels of this pixel type.
const CHANNEL_COUNT: u8;
/// Returns the components as a slice.
fn channels(&self) -> &[Self::Subpixel];
/// Returns the components as a mutable slice
fn channels_mut(&mut self) -> &mut [Self::Subpixel];
    /// A string that can help to interpret the meaning of each channel.
    /// See [gimp babl](http://gegl.org/babl/).
const COLOR_MODEL: &'static str;
/// Returns the channels of this pixel as a 4 tuple. If the pixel
/// has less than 4 channels the remainder is filled with the maximum value
#[deprecated(since = "0.24.0", note = "Use `channels()` or `channels_mut()`")]
fn channels4(
&self,
) -> (
Self::Subpixel,
Self::Subpixel,
Self::Subpixel,
Self::Subpixel,
);
/// Construct a pixel from the 4 channels a, b, c and d.
/// If the pixel does not contain 4 channels the extra are ignored.
#[deprecated(
since = "0.24.0",
note = "Use the constructor of the pixel, for example `Rgba([r,g,b,a])` or `Pixel::from_slice`"
)]
fn from_channels(
a: Self::Subpixel,
b: Self::Subpixel,
c: Self::Subpixel,
d: Self::Subpixel,
) -> Self;
/// Returns a view into a slice.
///
/// Note: The slice length is not checked on creation. Thus the caller has to ensure
/// that the slice is long enough to prevent panics if the pixel is used later on.
fn from_slice(slice: &[Self::Subpixel]) -> &Self;
/// Returns mutable view into a mutable slice.
///
/// Note: The slice length is not checked on creation. Thus the caller has to ensure
/// that the slice is long enough to prevent panics if the pixel is used later on.
fn from_slice_mut(slice: &mut [Self::Subpixel]) -> &mut Self;
/// Convert this pixel to RGB
fn to_rgb(&self) -> Rgb<Self::Subpixel>;
/// Convert this pixel to RGB with an alpha channel
fn to_rgba(&self) -> Rgba<Self::Subpixel>;
/// Convert this pixel to luma
fn to_luma(&self) -> Luma<Self::Subpixel>;
/// Convert this pixel to luma with an alpha channel
fn to_luma_alpha(&self) -> LumaA<Self::Subpixel>;
/// Apply the function ```f``` to each channel of this pixel.
fn map<F>(&self, f: F) -> Self
where
F: FnMut(Self::Subpixel) -> Self::Subpixel;
/// Apply the function ```f``` to each channel of this pixel.
fn apply<F>(&mut self, f: F)
where
F: FnMut(Self::Subpixel) -> Self::Subpixel;
/// Apply the function ```f``` to each channel except the alpha channel.
/// Apply the function ```g``` to the alpha channel.
fn map_with_alpha<F, G>(&self, f: F, g: G) -> Self
where
F: FnMut(Self::Subpixel) -> Self::Subpixel,
G: FnMut(Self::Subpixel) -> Self::Subpixel;
/// Apply the function ```f``` to each channel except the alpha channel.
/// Apply the function ```g``` to the alpha channel. Works in-place.
fn apply_with_alpha<F, G>(&mut self, f: F, g: G)
where
F: FnMut(Self::Subpixel) -> Self::Subpixel,
G: FnMut(Self::Subpixel) -> Self::Subpixel;
/// Apply the function ```f``` to each channel except the alpha channel.
fn map_without_alpha<F>(&self, f: F) -> Self
where
F: FnMut(Self::Subpixel) -> Self::Subpixel,
{
let mut this = *self;
this.apply_with_alpha(f, |x| x);
this
}
/// Apply the function `f` to each channel except the alpha channel.
/// Works in-place.
fn apply_without_alpha<F>(&mut self, f: F)
where
F: FnMut(Self::Subpixel) -> Self::Subpixel,
{
self.apply_with_alpha(f, |x| x);
}
/// Apply the function `f` to each channel of this pixel and
/// `other` pairwise.
fn map2<F>(&self, other: &Self, f: F) -> Self
where
F: FnMut(Self::Subpixel, Self::Subpixel) -> Self::Subpixel;
/// Apply the function `f` to each channel of this pixel and
/// `other` pairwise. Works in-place.
fn apply2<F>(&mut self, other: &Self, f: F)
where
F: FnMut(Self::Subpixel, Self::Subpixel) -> Self::Subpixel;
/// Invert this pixel
fn invert(&mut self);
/// Blend the color of a given pixel into this pixel, taking alpha channels into account.
fn blend(&mut self, other: &Self);
}
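The provided methods `map_without_alpha` and `apply_without_alpha` above are built on `apply_with_alpha`, passing the identity function for the alpha channel. A minimal standalone sketch of that pattern, using a toy gray-plus-alpha pixel rather than the crate's real `Pixel` trait:

```rust
// Toy gray+alpha pixel demonstrating the default-method pattern used by
// `map_without_alpha`: color channels get `f`, the alpha channel gets `g`.
#[derive(Clone, Copy, Debug, PartialEq)]
struct GrayA([u8; 2]);

trait ToyPixel: Copy {
    fn apply_with_alpha<F, G>(&mut self, f: F, g: G)
    where
        F: FnMut(u8) -> u8,
        G: FnMut(u8) -> u8;

    // Provided method, mirroring the trait above: reuse
    // `apply_with_alpha` with the identity function on alpha.
    fn map_without_alpha<F>(&self, f: F) -> Self
    where
        F: FnMut(u8) -> u8,
    {
        let mut this = *self;
        this.apply_with_alpha(f, |x| x);
        this
    }
}

impl ToyPixel for GrayA {
    fn apply_with_alpha<F, G>(&mut self, mut f: F, mut g: G)
    where
        F: FnMut(u8) -> u8,
        G: FnMut(u8) -> u8,
    {
        self.0[0] = f(self.0[0]); // luma channel
        self.0[1] = g(self.0[1]); // alpha channel
    }
}

fn main() {
    let p = GrayA([100, 200]);
    // Invert the luma channel; alpha is left untouched.
    let q = p.map_without_alpha(|c| 255 - c);
    assert_eq!(q, GrayA([155, 200]));
}
```

The same shape lets an implementor supply only `apply_with_alpha` and inherit the alpha-preserving variants for free.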
/// Private module for supertraits of sealed traits.
mod seals {
pub trait EncodableLayout {}
impl EncodableLayout for [u8] {}
impl EncodableLayout for [u16] {}
impl EncodableLayout for [f32] {}
}

vendor/image/src/utils/mod.rs vendored Normal file

@@ -0,0 +1,128 @@
//! Utilities
use std::iter::repeat;
#[inline(always)]
pub(crate) fn expand_packed<F>(buf: &mut [u8], channels: usize, bit_depth: u8, mut func: F)
where
F: FnMut(u8, &mut [u8]),
{
// Total number of packed bits in the buffer, and from it the
// number of bytes (entries) they occupy, rounded up.
let bits = buf.len() / channels * bit_depth as usize;
let extra = bits % 8;
let entries = bits / 8
+ match extra {
0 => 0,
_ => 1,
};
let mask = ((1u16 << bit_depth) - 1) as u8;
let i = (0..entries)
.rev() // Walk the packed bytes back-to-front
.flat_map(|idx|
// Within each byte, pixels are visited from the lowest shift up
// (last pixel first); together with the reversed byte order this
// lets the buffer be expanded in place from the back.
(0..8/bit_depth).map(|i| i*bit_depth).zip(repeat(idx)))
.skip(extra);
let buf_len = buf.len();
let j_inv = (channels..buf_len).step_by(channels);
for ((shift, i), j_inv) in i.zip(j_inv) {
let j = buf_len - j_inv;
let pixel = (buf[i] & (mask << shift)) >> shift;
func(pixel, &mut buf[j..(j + channels)])
}
}
/// Expand a buffer of packed 1-, 2-, or 4-bit integers into u8's. Assumes that
/// after every `row_size` entries there are padding bits up to the next byte boundary.
#[allow(dead_code)]
// When no image formats that use it are enabled
pub(crate) fn expand_bits(bit_depth: u8, row_size: u32, buf: &[u8]) -> Vec<u8> {
// Note: this conversion assumes that the scanlines begin on byte boundaries
let mask = (1u8 << bit_depth as usize) - 1;
let scaling_factor = 255 / ((1 << bit_depth as usize) - 1);
let bit_width = row_size * u32::from(bit_depth);
let skip = if bit_width % 8 == 0 {
0
} else {
(8 - bit_width % 8) / u32::from(bit_depth)
};
let row_len = row_size + skip;
let mut p = Vec::new();
let mut i = 0;
for v in buf {
for shift_inv in 1..=8 / bit_depth {
let shift = 8 - bit_depth * shift_inv;
// skip the pixels that can be neglected because scanlines should
// start at byte boundaries
if i % (row_len as usize) < (row_size as usize) {
let pixel = (v & (mask << shift)) >> shift;
p.push(pixel * scaling_factor);
}
i += 1;
}
}
p
}
/// Checks whether `width * height * bytes_per_pixel` would overflow a `u64`.
#[allow(dead_code)]
// When no image formats that use it are enabled
pub(crate) fn check_dimension_overflow(width: u32, height: u32, bytes_per_pixel: u8) -> bool {
width as u64 * height as u64 > u64::MAX / bytes_per_pixel as u64
}
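The check works because `width` and `height` are both `u32`, so their product always fits in a `u64`; comparing it against `u64::MAX / bytes_per_pixel` then detects whether multiplying by `bytes_per_pixel` would push the total byte count past `u64::MAX`. A quick standalone check of that reasoning:

```rust
// Mirror of the check above: true when width * height * bytes_per_pixel
// would exceed u64::MAX.
fn check_dimension_overflow(width: u32, height: u32, bytes_per_pixel: u8) -> bool {
    // width and height are both < 2^32, so this product is < 2^64.
    width as u64 * height as u64 > u64::MAX / bytes_per_pixel as u64
}

fn main() {
    // A typical RGBA image is nowhere near the limit.
    assert!(!check_dimension_overflow(4096, 4096, 4));
    // Maximal dimensions with 4 bytes per pixel would overflow.
    assert!(check_dimension_overflow(u32::MAX, u32::MAX, 4));
}
```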
#[allow(dead_code)]
// When no image formats that use it are enabled
pub(crate) fn vec_copy_to_u8<T>(vec: &[T]) -> Vec<u8>
where
T: bytemuck::Pod,
{
bytemuck::cast_slice(vec).to_owned()
}
#[inline]
pub(crate) fn clamp<N>(a: N, min: N, max: N) -> N
where
N: PartialOrd,
{
if a < min {
min
} else if a > max {
max
} else {
a
}
}
#[cfg(test)]
mod test {
#[test]
fn gray_to_luma8_skip() {
let check = |bit_depth, w, from, to| {
assert_eq!(super::expand_bits(bit_depth, w, from), to);
};
// Bit depth 1, skip is more than half a byte
check(
1,
10,
&[0b11110000, 0b11000000, 0b00001111, 0b11000000],
vec![
255, 255, 255, 255, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255,
],
);
// Bit depth 2, skip is more than half a byte
check(
2,
5,
&[0b11110000, 0b11000000, 0b00001111, 0b11000000],
vec![255, 255, 0, 0, 255, 0, 0, 255, 255, 255],
);
// Bit depth 2, skip is 0
check(
2,
4,
&[0b11110000, 0b00001111],
vec![255, 255, 0, 0, 0, 0, 255, 255],
);
// Bit depth 4, skip is half a byte
check(4, 1, &[0b11110011, 0b00001100], vec![255, 0]);
}
}