Compare commits

115 Commits

Author SHA1 Message Date
40e7d88fd0 Add NRes format documentation and decompression algorithms
Some checks failed
Test / cargo test (push) Failing after 40s
- Created `huffman_decompression.md` detailing the Huffman decompression algorithm used in NRes, including context structure, block modes, and decoding methods.
- Created `overview.md` for the NRes format, outlining file structure, header details, file entries, and packing algorithms.
- Updated `mkdocs.yml` to include new documentation files in the navigation structure.
2026-02-05 01:32:24 +04:00
afe6b9a29b feat: remove Rust project 2026-02-05 00:37:59 +04:00
6a46fe9825 chore(deps): update actions/checkout action to v6
All checks were successful
Test / cargo test (pull_request) Successful in 1m36s
Test / cargo test (push) Successful in 1m45s
RenovateBot / renovate (push) Successful in 1m50s
2026-01-30 14:16:24 +00:00
7818a7ef3f chore: update renovate workflow to include GITHUB_COM_TOKEN
All checks were successful
Test / cargo test (push) Successful in 1m43s
RenovateBot / renovate (push) Successful in 25s
2026-01-30 18:15:52 +04:00
15f2a73e95 chore: wire RENOVATE_LOG_LEVEL
All checks were successful
RenovateBot / renovate (push) Successful in 21s
Test / cargo test (push) Successful in 1m35s
2026-01-30 04:35:32 +04:00
2890b69678 migrate renovate config to gitea
All checks were successful
RenovateBot / renovate (push) Successful in 1m51s
Test / cargo test (push) Successful in 1m34s
2026-01-30 04:27:02 +04:00
27e9d2b39c Move CI to Gitea Actions
All checks were successful
Test / cargo test (push) Successful in 1m37s
2026-01-30 04:00:58 +04:00
b283e2a8df Update dependencies and fix clippy warnings
Some checks failed
Mirror / mirror (push) Failing after 7s
Test / cargo test (push) Successful in 1m39s
2026-01-30 03:29:08 +04:00
9dcce90201 chore: update dependencies and fix clippy warnings
Some checks failed
Mirror / mirror (push) Failing after 1m45s
Test / cargo test (push) Successful in 1m33s
- refresh Cargo.lock to latest compatible crates
- simplify u32->u64 conversion in libnres
- use is_multiple_of in unpacker list validation
2026-01-19 20:52:54 +04:00
renovate[bot]
7c876faf12 Update Rust crate console to v0.16.1 (#48)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-08 13:25:23 +00:00
renovate[bot]
39c66e698e Update Rust crate log to v0.4.28 (#47)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-04 05:03:11 +00:00
renovate[bot]
abac84a008 Update Rust crate image to v0.25.8 (#46)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-03 20:21:26 +00:00
renovate[bot]
b44217d4af Update Rust crate clap to v4.5.47 (#45)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-03 05:25:25 +00:00
renovate[bot]
c268e4c205 Update all digest updates (#41)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-27 12:23:15 +04:00
renovate[bot]
8aabe74eb2 Update Rust crate thiserror to v2.0.15 (#39)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-17 10:13:52 +00:00
84f2175fd2 Merge pull request #33 from valentineus/renovate/all-digest
Update all digest updates
2025-08-13 18:16:56 +04:00
renovate[bot]
307b9c6d90 Update all digest updates 2025-08-13 13:45:03 +00:00
renovate[bot]
7de26b16d4 Update Rust crate clap to v4.5.41 (#32)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-10 19:47:12 +00:00
52f2ad43e6 Merge pull request #29 from valentineus/renovate/all-digest
Update all digest updates
2025-07-09 03:23:23 +04:00
renovate[bot]
c4dec3fe4c Update all digest updates 2025-07-08 20:30:48 +00:00
e51edcb561 Update dependencies in Cargo.lock 2025-06-14 23:02:49 +00:00
2273fd4263 Merge pull request #7 from valentineus/nres
Update project structure
2025-06-15 02:42:55 +04:00
renovate[bot]
d4f104cf5e Update Rust crate clap to v4.5.40 (#28)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-06-10 13:27:38 +00:00
renovate[bot]
7f41a51f2a Update all digest updates (#27)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-28 03:58:39 +00:00
renovate[bot]
e97610a8ac Update Rust crate clap to v4.5.38 (#26)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-11 06:45:14 +00:00
renovate[bot]
ee02d922ae Update Rust crate miette to v7.6.0 (#25)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-27 14:41:46 +00:00
renovate[bot]
dbd7b6bf33 Update all digest updates (#24)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-21 18:14:47 +00:00
renovate[bot]
949c0aa087 Update all digest updates (#14)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-21 09:37:22 +00:00
renovate[bot]
4f29af53b6 Update Rust crate console to v0.15.11 (#13)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-03-02 05:23:27 +00:00
renovate[bot]
1d62740d59 Update Rust crate clap to v4.5.31 (#12)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-24 22:22:28 +00:00
d274602104 Merge branch 'master' into nres 2025-02-23 17:23:33 +04:00
8bc39d10b1 Updated dependencies 2025-02-23 17:22:30 +04:00
88faa6e3ea Merge branch 'master' into nres 2025-02-22 14:19:02 +04:00
renovate[bot]
66705ba4f0 Update Rust crate log to v0.4.26 (#11)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-21 10:51:06 +00:00
renovate[bot]
bb4c217ee2 Update all digest updates (#10)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-20 12:35:42 +00:00
renovate[bot]
c83822e353 Update Rust crate clap to v4.5.30 (#9)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-18 03:03:44 +00:00
renovate[bot]
130ee8df5b Update Rust crate clap to v4.5.29 (#8)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-12 02:55:33 +00:00
8d8653133b Update project structure 2025-02-08 01:11:02 +00:00
94d2f8a512 Update dependencies 2025-02-08 00:44:59 +00:00
215a093344 Updated Renovate config 2025-02-05 03:43:58 +04:00
3de1575082 Merge pull request #5 from valentineus/renovate/all-digest 2025-02-04 05:49:24 +04:00
renovate[bot]
aa8e1184bf Update Rust crate clap to v4.5.28 2025-02-04 01:47:42 +00:00
feb7ebe722 Merge pull request #4 from valentineus/renovate/all-digest
Update Rust crate miette to v7.5.0
2025-02-01 17:58:58 +04:00
renovate[bot]
becadef5ee Update Rust crate miette to v7.5.0 2025-02-01 04:26:56 +00:00
a4b36e1aea Merge pull request #3 from valentineus/renovate/all-digest
Update all digest updates
2025-01-30 04:34:02 +04:00
renovate[bot]
c7b099b596 Update all digest updates 2025-01-30 00:26:30 +00:00
48a08445e7 Added mirror 2025-01-30 04:25:11 +04:00
694de5edfa Moved Renocate config 2025-01-30 01:59:31 +04:00
0dc37e9604 Outdated CI and Renovate configurations have been removed, and a new Dependabot configuration file for dependency management has been added. 2025-01-24 20:50:13 +04:00
3d2e970225 Update Rust crate clap to v4.5.27 2025-01-21 00:01:51 +00:00
d90b9830bc Updated all dependencies 2025-01-20 20:18:27 +00:00
f91e1bda22 Update Rust crate serde_json to v1.0.137 2025-01-20 00:02:56 +00:00
e9a0fd718f Update Rust crate log to v0.4.25 2025-01-15 00:03:21 +00:00
509ce2d83d Update all digest updates 2025-01-10 23:44:03 +00:00
391756b77d Update all digest updates 2025-01-10 21:04:54 +00:00
035153c7c0 Update all digest updates 2025-01-07 21:04:58 +00:00
885a593829 Update Rust crate serde to v1.0.217 2024-12-27 21:02:46 +00:00
7c3c8cc969 Update all digest updates 2024-12-21 21:03:04 +00:00
00c62a9909 Update Rust crate thiserror to v2.0.8 2024-12-18 21:03:04 +00:00
c2899d27af Update Rust crate console to v0.15.10 2024-12-16 15:42:52 +00:00
e60fdd1958 Update Rust crate thiserror to v2.0.7 2024-12-14 21:02:34 +00:00
dd6d440ba5 Update Rust crate serde to v1.0.216 2024-12-11 21:04:41 +00:00
36a082ba18 Update all digest updates 2024-12-08 21:03:54 +00:00
09689a937c Update all digest updates 2024-12-03 21:01:39 +00:00
39f6479415 Update Rust crate miette to v7.4.0 2024-11-27 21:02:44 +00:00
01a2a47370 Update Rust crate miette to v7.3.0 2024-11-26 21:05:22 +00:00
4cd42afa37 Update Rust crate serde_json to v1.0.133 2024-11-17 21:05:34 +00:00
298aa954b9 Update Rust crate clap to v4.5.21 2024-11-13 21:01:52 +00:00
910deb6c17 Update all digest updates 2024-11-12 21:01:58 +00:00
4a22e2177e Merge pull request 'Update Rust crate thiserror to v2' (!36) from renovate/thiserror-2.x into master
Reviewed-on: #36
2024-11-11 15:10:34 +03:00
729c972573 Update Rust crate thiserror to v2 2024-11-10 21:05:05 +00:00
250d78a955 Update Rust crate thiserror to v1.0.69 2024-11-10 21:04:56 +00:00
03f2d762bb Merge pull request 'Update ghcr.io/renovatebot/renovate Docker tag to v39' (!34) from renovate/ghcr.io-renovatebot-renovate-39.x into master
Reviewed-on: #34
2024-11-06 09:43:20 +03:00
fcaa729544 Update all digest updates 2024-11-05 21:02:13 +00:00
8c2a6e2c19 Update ghcr.io/renovatebot/renovate Docker tag to v39 2024-11-04 21:02:13 +00:00
daa2efba89 Update Rust crate thiserror to v1.0.66 2024-11-01 21:03:36 +00:00
b5748505ef Update Rust crate serde to v1.0.214 2024-10-28 21:02:55 +00:00
d305b1f005 Update all digest updates 2024-10-22 21:01:55 +00:00
2cfba4891c Update Rust crate serde_json to v1.0.132 2024-10-19 21:01:55 +00:00
777d3814d3 Update Rust crate serde_json to v1.0.131 2024-10-18 23:23:57 +00:00
784ceeebdf Update Rust crate serde_json to v1.0.130 2024-10-18 21:02:00 +00:00
e3675555ea Update all digest updates 2024-10-17 21:02:30 +00:00
91104e214f Update Rust crate image to v0.25.3 2024-10-16 21:04:18 +00:00
9198b18652 Update Rust crate clap to v4.5.20 2024-10-08 21:04:51 +00:00
1ad7949828 Update Rust crate clap to v4.5.19 2024-10-01 21:03:54 +00:00
b98f01a810 Update Rust crate thiserror to v1.0.64 2024-09-24 09:34:04 +00:00
fa88050a52 Update all digest updates 2024-09-23 21:04:33 +00:00
1123c8a56e Update all digest updates 2024-09-15 21:07:25 +00:00
2eb6333552 Update Rust crate serde to v1.0.209 2024-08-24 12:51:04 +00:00
c5224e006f Update Rust crate serde_json to v1.0.127 2024-08-23 21:04:54 +00:00
79599f3cf4 Update Rust crate clap to v4.5.16 2024-08-15 23:00:19 +00:00
7acf99b9d6 Update all digest updates 2024-08-15 21:03:43 +00:00
ec542703b4 Update Rust crate serde to v1.0.207 2024-08-12 21:02:14 +00:00
ee1cdda38b Update Rust crate serde_json to v1.0.124 2024-08-11 21:42:34 +00:00
293a1de413 Update all digest updates 2024-08-11 21:04:17 +00:00
6635d4da9a Update Rust crate clap to v4.5.15 2024-08-10 21:04:17 +00:00
f549769fcf Update all digest updates 2024-08-08 21:05:42 +00:00
c0a56acc0c Update Rust crate serde_json to v1.0.122 2024-08-02 21:05:34 +00:00
a136dc5fa4 Update Rust crate clap to v4.5.13 2024-07-31 22:13:23 +00:00
1b13f2acfc Update Rust crate clap to v4.5.12 2024-07-31 21:03:54 +00:00
6c127ce028 Update Rust crate serde_json to v1.0.121 2024-07-29 21:03:12 +00:00
bc2e051741 Merge branch 'master' into renovate/ghcr.io-renovatebot-renovate-38.x 2024-07-26 17:12:49 +03:00
9abd2a4558 Update ghcr.io/renovatebot/renovate Docker tag to v38 2024-07-25 21:03:45 +00:00
f267a56fd0 Update Rust crate clap to v4.5.11 2024-07-25 21:03:42 +00:00
1d592418af Update Rust crate clap to v4.5.10 2024-07-23 21:02:15 +00:00
3448f0f930 Update Rust crate image to v0.25.2 2024-07-21 21:04:37 +00:00
039ed238a6 Added Gitea CI testing 2024-07-19 18:23:35 +04:00
b7349f9df9 Added CI check 2024-07-19 13:08:47 +00:00
12c7f0284e Added DevContainer 2024-07-19 13:08:46 +00:00
5c9a691495 Update Rust crate miette to v7 2024-07-19 12:43:23 +00:00
bf8be5c045 Update all digest updates 2024-07-19 12:41:15 +00:00
ee8a5fc02b Added Gitea 2024-07-19 16:39:08 +04:00
a990de90fe Deleted vendor folder 2024-07-19 16:37:58 +04:00
3d48cd3f81 Initial MkDocs 2024-02-06 02:26:50 +04:00
78d6eca336 Initial GitHub Actions 2024-02-06 02:20:26 +04:00
7341 changed files with 1747 additions and 2162292 deletions


@@ -1,5 +0,0 @@
[source.crates-io]
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "vendor"


@@ -0,0 +1,28 @@
name: RenovateBot

on:
  schedule:
    - cron: "@daily"
  push:
    branches:
      - master

jobs:
  renovate:
    container: ghcr.io/renovatebot/renovate:43
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v6

      - name: Run renovate
        run: |
          renovate
        env:
          GITHUB_COM_TOKEN: ${{ secrets.RENOVATE_GITHUB_TOKEN }}
          LOG_LEVEL: ${{ vars.RENOVATE_LOG_LEVEL }}
          RENOVATE_CONFIG_FILE: renovate.config.cjs
          RENOVATE_LOG_LEVEL: ${{ vars.RENOVATE_LOG_LEVEL }}
          RENOVATE_REPOSITORIES: ${{ gitea.repository }}
          RENOVATE_TOKEN: ${{ secrets.RENOVATE_TOKEN }}

13
.gitea/workflows/test.yml Normal file

@@ -0,0 +1,13 @@
name: Test

on: [push, pull_request]

jobs:
  test:
    name: cargo test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo check --all
      - run: cargo test --all-features

14
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,14 @@
version: 2
updates:
  - package-ecosystem: "cargo"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "devcontainers"
    directory: "/"
    schedule:
      interval: "weekly"

1
.gitignore vendored

@@ -1 +0,0 @@
/target

30
.renovaterc Normal file

@@ -0,0 +1,30 @@
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:recommended",
    ":disableDependencyDashboard"
  ],
  "assignees": [
    "valentineus"
  ],
  "labels": [
    "dependencies",
    "automated"
  ],
  "packageRules": [
    {
      "groupName": "all digest updates",
      "groupSlug": "all-digest",
      "matchUpdateTypes": [
        "minor",
        "patch",
        "pin",
        "digest"
      ],
      "matchPackageNames": [
        "*"
      ],
      "automerge": true
    }
  ]
}

1182
Cargo.lock generated

File diff suppressed because it is too large


@@ -1,14 +0,0 @@
[workspace]
resolver = "2"
members = [
    "libnres",
    "nres-cli",
    "packer",
    "texture-decoder",
    "unpacker",
]

[profile.release]
codegen-units = 1
lto = true
strip = true

17
docs/index.md Normal file

@@ -0,0 +1,17 @@
# Welcome to MkDocs

For full documentation visit [mkdocs.org](https://www.mkdocs.org).

## Commands

* `mkdocs new [dir-name]` - Create a new project.
* `mkdocs serve` - Start the live-reloading docs server.
* `mkdocs build` - Build the documentation site.
* `mkdocs -h` - Print help message and exit.

## Project layout

    mkdocs.yml    # The configuration file.
    docs/
        index.md  # The documentation homepage.
        ...       # Other markdown pages, images and other files.


@@ -0,0 +1,426 @@
# FRES Decompression

## Overview

FRES is a hybrid compression algorithm that combines RLE (run-length encoding) with LZ77-style compression over a sliding window. It has two operating modes: **adaptive Huffman** (flag `a1 < 0`) and **plain bit mode** (flag `a1 >= 0`).

```c
char __stdcall sub_1001B22E(
    char a1,     // Mode flag (< 0 = Huffman, >= 0 = plain)
    int a2,      // Key/seed (not used directly)
    _BYTE *a3,   // Output buffer
    int a4,      // Output buffer size
    _BYTE *a5,   // Compressed input data
    int a6       // Input data size
)
```

## Data structures

### Global variables

```c
byte_1003A910[4096]  // Circular sliding-window buffer (12-bit addressing)
dword_1003E09C       // Pointer to the end of the output buffer
dword_1003E0A0       // Current position in the circular buffer
dword_1003E098       // Huffman tree state
dword_1003E0A4       // LZ77 repeat length
```

### Constants

```c
#define WINDOW_SIZE   4096    // Sliding-window size (0x1000)
#define WINDOW_MASK   0x0FFF  // Mask for circular-buffer wrap-around
#define INIT_POS_NEG  4078    // Initial window position in Huffman mode
#define INIT_POS_POS  4036    // Initial window position in plain mode
```
## Mode 1: Plain bit mode (a1 >= 0)

This is the simpler mode, with no Huffman coding. It works as follows:

### Algorithm

```
Initialization:
    position = 4036
    flags = 0
    flagBits = 0

Decompression loop:
    While input remains and the output buffer is not full:

    1. Read a flag bit:
       if (flagBits high bit == 0):
           flags = *input++
           flagBits = 127 (0x7F)
       flag_bit = flags & 1
       flags >>= 1

    2. Read a second bit:
       if (flagBits low bit == 0):
           load a new flags byte
       second_bit = flags & 1
       flags >>= 1

    3. Choose an action based on the bits:

       a) If both bits == 0:
          // Literal - copy a single byte
          byte = *input++
          window[position] = byte
          *output++ = byte
          position = (position + 1) & 0xFFF

       b) If the second bit == 0 (first == 1):
          // LZ77 copy
          word = *(uint16*)input
          input += 2
          offset = (word >> 4) & 0xFFF   // 12-bit offset
          length = (word & 0xF) + 3      // 4-bit length + 3
          src_pos = offset
          Repeat length times:
              byte = window[src_pos]
              window[position] = byte
              *output++ = byte
              src_pos = (src_pos + 1) & 0xFFF
              position = (position + 1) & 0xFFF
```
### Compressed data format (plain mode)

```
Bit stream:
    [FLAG_BIT] [SECOND_BIT] [DATA]

Where:
    FLAG_BIT = 0, SECOND_BIT = 0:   → Literal (1 byte follows)
    FLAG_BIT = 1, SECOND_BIT = 0:   → LZ77 copy (2 bytes follow)
    FLAG_BIT = any, SECOND_BIT = 1: → Literal (1 byte follows)

LZ77 copy format (2 bytes, little-endian):
    Byte 0: offset_low (bits 0-7)
    Byte 1: [length:4][offset_high:4]

    offset = (byte1 >> 4) | (byte0 << 4)  // 12 bits
    length = (byte1 & 0x0F) + 3           // 4 bits + 3 = 3-18 bytes
```
## Mode 2: Adaptive Huffman mode (a1 < 0)

The more complex mode, using a dynamically updated Huffman tree.

### Huffman initialization

```c
Table initialization:
1. Build the fast-decoding table (dword_1003B94C[256])
2. Initialize the code lengths (byte_1003BD4C[256])
3. Build the initial tree (627 nodes)
```

### Decoding algorithm

```
Initialization:
    position = 4078
    bit_buffer = 0
    bit_count = 8

    Fill the window with 0x20 (space):
    for i in range(2039):
        window[i] = 0x20

Decompression loop:
    Until the end of the output buffer:

    1. Decode a symbol through the Huffman tree:
       tree_index = dword_1003E098         // root node
       While tree_index < 627:             // internal node
           bit = read_bit()
           tree_index = tree[tree_index + bit]
       symbol = tree_index - 627           // tree leaf
       Update the tree (sub_1001B0AE)

    2. Process the symbol:
       if (symbol < 256):
           // Literal
           window[position] = symbol
           *output++ = symbol
           position = (position + 1) & 0xFFF
       else:
           // LZ77 copy
           length = symbol - 253

           // Read the offset (encoded separately)
           offset_bits = read_bits(length table)
           offset = decode(offset_bits)
           src_pos = (position - 1 - offset) & 0xFFF

           Repeat length times:
               byte = window[src_pos]
               window[position] = byte
               *output++ = byte
               src_pos = (src_pos + 1) & 0xFFF
               position = (position + 1) & 0xFFF
```

### Tree updates

The adaptive Huffman tree is updated after every decoded symbol; a simplified sketch of the rescaling step follows the pseudocode below.

```
Update algorithm:
1. Increment the symbol's frequency counter
2. If the frequency exceeds a threshold:
   Rearrange tree nodes (swapping)
3. If the total counter reaches 0x8000:
   Recompute all frequencies (divide by 2)
```
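
The node-swapping step depends on internals of `sub_1001B0AE` that are not reproduced above, so only the counting/rescaling half can be illustrated here; the following is a minimal sketch under that assumption (the `freq` array and helper name are mine, only the 0x8000 threshold comes from the description above):

```python
MAX_FREQ = 0x8000  # rescale threshold from the description above

def bump_symbol(freq: list[int], symbol: int) -> None:
    """Increment one symbol's frequency; halve all counts when the total overflows.

    Only the counting/rescaling half of the adaptive update is shown here;
    the tree reordering performed by sub_1001B0AE is omitted.
    """
    freq[symbol] += 1
    if sum(freq) >= MAX_FREQ:
        for i in range(len(freq)):
            # Halve every count, keeping each at least 1 so no symbol vanishes
            freq[i] = (freq[i] + 1) // 2
```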
## Pseudocode for a full implementation

### Decoder (plain mode)

```python
def fres_decompress_simple(input_data, output_size):
    """
    FRES decompression, plain mode
    """
    # Initialization
    window = bytearray(4096)
    position = 4036
    output = bytearray()
    input_pos = 0

    flags = 0
    flag_bits_high = 0
    flag_bits_low = 0

    while len(output) < output_size and input_pos < len(input_data):
        # Read the first flag bit
        if (flag_bits_high & 1) == 0:
            if input_pos >= len(input_data):
                break
            flags = input_data[input_pos]
            input_pos += 1
            flag_bits_high = 127  # 0x7F
        flag_high = flags & 1
        flags >>= 1
        flag_bits_high >>= 1

        # Read the second flag bit
        if input_pos >= len(input_data):
            break
        if (flag_bits_low & 1) == 0:
            flags = input_data[input_pos]
            input_pos += 1
            flag_bits_low = 127
        flag_low = flags & 1
        flags >>= 1
        flag_bits_low >>= 1

        # Dispatch on the flag bits
        if not flag_low:  # second bit == 0
            if not flag_high:  # both bits == 0
                # Literal
                if input_pos >= len(input_data):
                    break
                byte = input_data[input_pos]
                input_pos += 1
                window[position] = byte
                output.append(byte)
                position = (position + 1) & 0xFFF
            else:  # first == 1, second == 0
                # LZ77 copy
                if input_pos + 1 >= len(input_data):
                    break
                word = input_data[input_pos] | (input_data[input_pos + 1] << 8)
                input_pos += 2
                offset = (word >> 4) & 0xFFF
                length = (word & 0xF) + 3
                for _ in range(length):
                    if len(output) >= output_size:
                        break
                    byte = window[offset]
                    window[position] = byte
                    output.append(byte)
                    offset = (offset + 1) & 0xFFF
                    position = (position + 1) & 0xFFF
        else:  # second bit == 1
            # Literal
            if input_pos >= len(input_data):
                break
            byte = input_data[input_pos]
            input_pos += 1
            window[position] = byte
            output.append(byte)
            position = (position + 1) & 0xFFF

    return bytes(output[:output_size])
```
### Helper functions

```python
class BitReader:
    """Bit-by-bit reader"""

    def __init__(self, data):
        self.data = data
        self.pos = 0
        self.bit_buffer = 0
        self.bits_available = 0

    def read_bit(self):
        """Read a single bit"""
        if self.bits_available == 0:
            if self.pos >= len(self.data):
                return 0
            self.bit_buffer = self.data[self.pos]
            self.pos += 1
            self.bits_available = 8
        bit = self.bit_buffer & 1
        self.bit_buffer >>= 1
        self.bits_available -= 1
        return bit

    def read_bits(self, count):
        """Read several bits"""
        result = 0
        for i in range(count):
            result |= self.read_bit() << i
        return result


def initialize_window():
    """Initialize the window for Huffman mode"""
    window = bytearray(4096)
    # Fill with the initial value
    for i in range(4078):
        window[i] = 0x20  # Space
    return window
```
## Key properties

### 1. Circular buffer
- Size: 4096 bytes (12-bit addressing)
- Mask: `0xFFF` for circular access
- The initial position depends on the mode

### 2. Dual modes
- **Plain**: faster, weaker compression, suited to low-entropy data
- **Huffman**: slower, better compression, suited to high-entropy data

### 3. LZ77 coding
- Offset: 12 bits (0-4095)
- Length: 4 bits + 3 (3-18 bytes)
- Maximum copy: 18 bytes

### 4. Flag bits
A two-level flag scheme selects the type of the data that follows.

## Implementation pitfalls

### 1. Bit order
Bits are read LSB-first, which is easy to get wrong.

### 2. Huffman tree
The adaptive tree requires precise frequency tracking and correct rebalancing.

### 3. Boundary conditions
Buffer bounds must be checked carefully.

## Sample data

### Example 1: Literals (plain mode)

```
Input bits: 00 00 00 ...
Output: a sequence of literals

Example:
Flags: 0x00 (00000000)
Data:  0x41 ('A'), 0x42 ('B'), 0x43 ('C'), ...
Output: "ABC..."
```

### Example 2: LZ77 copy

```
Input bits: 10 ...
Output: a copy from the window

Example:
Flags: 0x01 (00000001) - first bit = 1
Word:  0x1234
Decoded:
    offset = (0x34 << 4) | (0x12 >> 4) = 0x341
    length = (0x12 & 0xF) + 3 = 5
Action: copy 5 bytes starting at position offset
```

## Debugging

For debugging, tracing along these lines is helpful:

```python
def debug_fres_decompress(input_data, output_size):
    """Version with debug output"""
    print(f"Input size: {len(input_data)}")
    print(f"Output size: {output_size}")

    # ... the implementation, with a print at each step
    print(f"Flag: {flag_high}{flag_low}")
    if is_literal:
        print(f"  Literal: 0x{byte:02X}")
    else:
        print(f"  LZ77: offset={offset}, length={length}")
```

## Conclusion

FRES is an effective hybrid algorithm combining:
- RLE for repeated data
- LZ77 for references to earlier data
- Optional Huffman coding of symbols

**Decompression complexity:** O(n), where n is the size of the output data.

**Window size:** 4 KB, a good balance between memory use and compression ratio.


@@ -0,0 +1,598 @@
# Huffman Decompression

## Overview

This is a **DEFLATE-like** decompression algorithm used in [NRes](overview.md). It supports three block modes and uses two Huffman trees: one for literals/lengths and one for distances.

```c
int __thiscall sub_1001AF10(
    unsigned int *this,  // Decoder context (HuffmanContext)
    int *a2              // Output parameter (operation result)
)
```

## Context structure (HuffmanContext)

```c
struct HuffmanContext {
    uint32_t bitBuffer[0x4000];   // 0x00000-0x0FFFF: bit buffer (64 KB)
    uint32_t compressedSize;      // 0x10000: compressed data size
    uint32_t unknown1;            // 0x10004: unused
    uint32_t outputPosition;      // 0x10008: position in the output buffer
    uint32_t currentByte;         // 0x1000C: current byte
    uint8_t* sourceData;          // 0x10010: pointer to the compressed data
    uint8_t* destData;            // 0x10014: pointer to the output buffer
    uint32_t bitPosition;         // 0x10018: bit position
    uint32_t inputPosition;       // 0x1001C: read position (this[16389])
    uint32_t decodedBytes;        // 0x10020: decoded bytes (this[16386])
    uint32_t bitBufferValue;      // 0x10024: bit-buffer value (this[16391])
    uint32_t bitsAvailable;       // 0x10028: available bits (this[16392])
    // ...
};

// Offsets within the structure:
#define CTX_OUTPUT_POS    16385  // this[16385]
#define CTX_DECODED_BYTES 16386  // this[16386]
#define CTX_SOURCE_PTR    16387  // this[16387]
#define CTX_DEST_PTR      16388  // this[16388]
#define CTX_INPUT_POS     16389  // this[16389]
#define CTX_BIT_BUFFER    16391  // this[16391]
#define CTX_BITS_COUNT    16392  // this[16392]
#define CTX_MAX_SYMBOL    16393  // this[16393]
```

## The three block modes

The block type is determined by the first 3 bits, read LSB-first:

```
Bits: [FINAL:1] [TYPE:2]

FINAL = 1: this is the last block
TYPE:
    00 = Uncompressed block (raw data)
    01 = Compressed with fixed Huffman codes
    10 = Compressed with dynamic Huffman codes
    11 = Reserved (error)
```
### Main decoding loop

```c
int decode_block(HuffmanContext* ctx) {
    // Read the FINAL bit, then the 2-bit TYPE
    int final_bit = read_bit(ctx);
    int type = read_bits(ctx, 2);
    int status;

    switch (type) {
        case 0:  // 00 - uncompressed block
            status = decode_uncompressed_block(ctx);
            break;
        case 1:  // 01 - fixed Huffman codes
            status = decode_fixed_huffman_block(ctx);
            break;
        case 2:  // 10 - dynamic Huffman codes
            status = decode_dynamic_huffman_block(ctx);
            break;
        default: // 11 - error
            return 2;  // Unsupported block type
    }
    if (status != 0)
        return status;

    return final_bit ? 0 : 1;  // 0 = done, 1 = more blocks follow
}
```

## Mode 0: Uncompressed block

Bytes are simply copied with no compression.

### Algorithm

```python
def decode_uncompressed_block(ctx):
    """
    Uncompressed-block format:
    [LEN:16][NLEN:16][DATA:LEN]

    Where:
    LEN  - data length (little-endian)
    NLEN - one's complement of LEN (~LEN)
    DATA - raw bytes
    """
    # Align to a byte boundary
    bits_to_skip = ctx.bits_available & 7
    ctx.bit_buffer >>= bits_to_skip
    ctx.bits_available -= bits_to_skip

    # Read the length (16 bits)
    length = read_bits(ctx, 16)

    # Read the complemented length (16 bits)
    nlength = read_bits(ctx, 16)

    # Integrity check
    if length != (~nlength & 0xFFFF):
        return 1  # Error

    # Copy the data
    for i in range(length):
        byte = read_byte(ctx)
        write_output_byte(ctx, byte)

        # Check for output-buffer overflow
        if ctx.output_position >= 0x8000:
            flush_output_buffer(ctx)

    return 0
```

### Notes

- Data is copied as-is
- Used for incompressible data
- Requires byte alignment before the length is read
## Mode 1: Fixed Huffman codes

Uses predefined Huffman tables.

### Fixed code-length tables

```python
# Literal/length table (288 symbols)
FIXED_LITERAL_LENGTHS = [
    8, 8, 8, 8, ..., 8,  # 0-143:   code length 8 (144 symbols)
    9, 9, 9, 9, ..., 9,  # 144-255: code length 9 (112 symbols)
    7, 7, 7, 7, ..., 7,  # 256-279: code length 7 (24 symbols)
    8, 8, 8, 8, ..., 8   # 280-287: code length 8 (8 symbols)
]

# Distance table (30 symbols)
FIXED_DISTANCE_LENGTHS = [
    5, 5, 5, 5, ..., 5   # 0-29: all codes are 5 bits
]
```

### Decoding algorithm

```python
def decode_fixed_huffman_block(ctx):
    """Decode a block that uses the fixed Huffman codes"""
    # Build the fixed tables
    lit_tree = build_huffman_tree(FIXED_LITERAL_LENGTHS)
    dist_tree = build_huffman_tree(FIXED_DISTANCE_LENGTHS)

    while True:
        # Decode a literal/length symbol
        symbol = decode_huffman_symbol(ctx, lit_tree)

        if symbol < 256:
            # Literal - emit the byte as-is
            write_output_byte(ctx, symbol)
        elif symbol == 256:
            # End of block
            break
        else:
            # Length symbol (257-285)
            length = decode_length(ctx, symbol)

            # Decode the distance
            dist_symbol = decode_huffman_symbol(ctx, dist_tree)
            distance = decode_distance(ctx, dist_symbol)

            # Copy from history
            copy_from_history(ctx, distance, length)
```

### Extra-bit tables

```python
# Extra bits for lengths
LENGTH_EXTRA_BITS = {
    257: 0, 258: 0, 259: 0, 260: 0, 261: 0, 262: 0, 263: 0, 264: 0,  # 3-10
    265: 1, 266: 1, 267: 1, 268: 1,  # 11-18
    269: 2, 270: 2, 271: 2, 272: 2,  # 19-34
    273: 3, 274: 3, 275: 3, 276: 3,  # 35-66
    277: 4, 278: 4, 279: 4, 280: 4,  # 67-130
    281: 5, 282: 5, 283: 5, 284: 5,  # 131-257
    285: 0                           # 258
}

LENGTH_BASE = {
    257: 3, 258: 4, 259: 5, ..., 285: 258
}

# Extra bits for distances
DISTANCE_EXTRA_BITS = {
    0: 0, 1: 0, 2: 0, 3: 0,          # 1-4
    4: 1, 5: 1, 6: 2, 7: 2,          # 5-12
    8: 3, 9: 3, 10: 4, 11: 4,        # 13-48
    12: 5, 13: 5, 14: 6, 15: 6,      # 49-192
    16: 7, 17: 7, 18: 8, 19: 8,      # 193-768
    20: 9, 21: 9, 22: 10, 23: 10,    # 769-3072
    24: 11, 25: 11, 26: 12, 27: 12,  # 3073-12288
    28: 13, 29: 13                   # 12289-24576
}

DISTANCE_BASE = {
    0: 1, 1: 2, 2: 3, 3: 4, ..., 29: 24577
}
```

### Decoding lengths and distances

```python
def decode_length(ctx, symbol):
    """Decode a length from a length symbol"""
    base = LENGTH_BASE[symbol]
    extra_bits = LENGTH_EXTRA_BITS[symbol]
    if extra_bits > 0:
        extra = read_bits(ctx, extra_bits)
        return base + extra
    return base


def decode_distance(ctx, symbol):
    """Decode a distance from a distance symbol"""
    base = DISTANCE_BASE[symbol]
    extra_bits = DISTANCE_EXTRA_BITS[symbol]
    if extra_bits > 0:
        extra = read_bits(ctx, extra_bits)
        return base + extra
    return base
```
## Mode 2: Dynamic Huffman codes

The most complex mode. The Huffman tables are transmitted at the start of the block.

### Dynamic block header format

```
Header bits:
[HLIT:5]  - number of literal/length codes - 257 (value: 257-286)
[HDIST:5] - number of distance codes - 1 (value: 1-30)
[HCLEN:4] - number of code lengths for the code-length alphabet - 4 (value: 4-19)

Next come the code lengths for the code-length alphabet:
[CL0:3] [CL1:3] ... [CL(HCLEN-1):3]

Order of the code-length codes:
16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15
```

### Decoding algorithm

```python
def decode_dynamic_huffman_block(ctx):
    """Decode a block that uses dynamic Huffman codes"""
    # 1. Read the header
    hlit = read_bits(ctx, 5) + 257   # Number of literal/length codes
    hdist = read_bits(ctx, 5) + 1    # Number of distance codes
    hclen = read_bits(ctx, 4) + 4    # Number of code-length codes

    if hlit > 286 or hdist > 30:
        return 1  # Error

    # 2. Read the lengths for the code-length alphabet
    CODE_LENGTH_ORDER = [16, 17, 18, 0, 8, 7, 9, 6, 10, 5,
                         11, 4, 12, 3, 13, 2, 14, 1, 15]
    code_length_lengths = [0] * 19
    for i in range(hclen):
        code_length_lengths[CODE_LENGTH_ORDER[i]] = read_bits(ctx, 3)

    # 3. Build the code-length tree
    cl_tree = build_huffman_tree(code_length_lengths)

    # 4. Decode the literal/length and distance code lengths
    lengths = decode_code_lengths(ctx, cl_tree, hlit + hdist)

    # 5. Split them into the two alphabets
    literal_lengths = lengths[:hlit]
    distance_lengths = lengths[hlit:]

    # 6. Build the decoding trees
    lit_tree = build_huffman_tree(literal_lengths)
    dist_tree = build_huffman_tree(distance_lengths)

    # 7. Decode the data (same as in the fixed mode)
    return decode_huffman_data(ctx, lit_tree, dist_tree)
```

### Decoding the code lengths

A dedicated alphabet with three special symbols is used:

```python
def decode_code_lengths(ctx, cl_tree, total_count):
    """
    Decode a sequence of code lengths

    Special symbols:
    16 - repeat the previous length 3-6 times (2 extra bits)
    17 - repeat a zero length 3-10 times (3 extra bits)
    18 - repeat a zero length 11-138 times (7 extra bits)
    """
    lengths = []
    last_length = 0

    while len(lengths) < total_count:
        symbol = decode_huffman_symbol(ctx, cl_tree)

        if symbol < 16:
            # Ordinary length (0-15)
            lengths.append(symbol)
            last_length = symbol
        elif symbol == 16:
            # Repeat the previous length
            repeat = read_bits(ctx, 2) + 3
            lengths.extend([last_length] * repeat)
        elif symbol == 17:
            # Repeat zero (short form)
            repeat = read_bits(ctx, 3) + 3
            lengths.extend([0] * repeat)
            last_length = 0
        elif symbol == 18:
            # Repeat zero (long form)
            repeat = read_bits(ctx, 7) + 11
            lengths.extend([0] * repeat)
            last_length = 0

    return lengths[:total_count]
```
## Building the Huffman tree

```python
def build_huffman_tree(code_lengths):
    """
    Build a Huffman tree from code lengths
    using the canonical-Huffman-codes algorithm
    """
    max_length = max(code_lengths) if code_lengths else 0

    # 1. Count the codes of each length
    bl_count = [0] * (max_length + 1)
    for length in code_lengths:
        if length > 0:
            bl_count[length] += 1

    # 2. Compute the first code of each length
    code = 0
    next_code = [0] * (max_length + 1)
    for bits in range(1, max_length + 1):
        code = (code + bl_count[bits - 1]) << 1
        next_code[bits] = code

    # 3. Assign numeric codes to symbols
    tree = {}
    for symbol, length in enumerate(code_lengths):
        if length > 0:
            tree[symbol] = {
                'code': next_code[length],
                'length': length
            }
            next_code[length] += 1

    # 4. Build the fast-lookup structure
    lookup_table = create_lookup_table(tree)
    return lookup_table


def decode_huffman_symbol(ctx, tree):
    """Decode a single symbol from the Huffman tree"""
    code = 0
    for length in range(1, 16):
        bit = read_bit(ctx)
        code = (code << 1) | bit

        # Look it up in the fast-lookup table
        if (code, length) in tree:
            return tree[(code, length)]

    return -1  # Decoding error
```
## Output buffer management

```python
def write_output_byte(ctx, byte):
    """Write a byte to the output buffer"""
    # Write into bitBuffer (reused as a circular buffer)
    ctx.bitBuffer[ctx.decodedBytes] = byte
    ctx.decodedBytes += 1

    # Flush once the buffer is full (32 KB)
    if ctx.decodedBytes >= 0x8000:
        flush_output_buffer(ctx)


def flush_output_buffer(ctx):
    """Flush the output buffer to the final output"""
    # Copy the data into the final output buffer
    dest_offset = ctx.outputPosition + ctx.destData
    memcpy(dest_offset, ctx.bitBuffer, ctx.decodedBytes)

    # Update the counters
    ctx.outputPosition += ctx.decodedBytes
    ctx.decodedBytes = 0


def copy_from_history(ctx, distance, length):
    """Copy data out of the history window (LZ77)"""
    # Source position inside the circular buffer
    src_pos = (ctx.decodedBytes - distance) & 0x7FFF

    for i in range(length):
        byte = ctx.bitBuffer[src_pos]
        write_output_byte(ctx, byte)
        src_pos = (src_pos + 1) & 0x7FFF
```
## Full Python implementation

```python
class HuffmanDecoder:
    """Complete DEFLATE-like decoder"""

    def __init__(self, input_data, output_size):
        self.input_data = input_data
        self.output_size = output_size
        self.input_pos = 0
        self.bit_buffer = 0
        self.bits_available = 0
        self.output = bytearray()
        self.history = bytearray(32768)  # 32 KB circular buffer
        self.history_pos = 0

    def read_bit(self):
        """Read a single bit"""
        if self.bits_available == 0:
            if self.input_pos >= len(self.input_data):
                return 0
            self.bit_buffer = self.input_data[self.input_pos]
            self.input_pos += 1
            self.bits_available = 8
        bit = self.bit_buffer & 1
        self.bit_buffer >>= 1
        self.bits_available -= 1
        return bit

    def read_bits(self, count):
        """Read several bits (LSB first)"""
        result = 0
        for i in range(count):
            result |= self.read_bit() << i
        return result

    def write_byte(self, byte):
        """Write a byte to the output and to the history window"""
        self.output.append(byte)
        self.history[self.history_pos] = byte
        self.history_pos = (self.history_pos + 1) & 0x7FFF

    def copy_from_history(self, distance, length):
        """Copy from the history window"""
        src_pos = (self.history_pos - distance) & 0x7FFF
        for _ in range(length):
            byte = self.history[src_pos]
            self.write_byte(byte)
            src_pos = (src_pos + 1) & 0x7FFF

    def decompress(self):
        """Main decompression loop"""
        while len(self.output) < self.output_size:
            # Read the block header
            final = self.read_bit()
            block_type = self.read_bits(2)

            if block_type == 0:
                # Uncompressed block
                if not self.decode_uncompressed_block():
                    break
            elif block_type == 1:
                # Fixed Huffman codes
                if not self.decode_fixed_huffman_block():
                    break
            elif block_type == 2:
                # Dynamic Huffman codes
                if not self.decode_dynamic_huffman_block():
                    break
            else:
                # Error
                raise ValueError("Invalid block type")

            if final:
                break

        return bytes(self.output[:self.output_size])

    # ... implementations of the decode_*_block methods ...
```
## Optimizations

### 1. Fast-lookup table

```python
# Precomputed table for the first 9 bits (first level)
FAST_LOOKUP_BITS = 9
fast_table = [None] * (1 << FAST_LOOKUP_BITS)

# Filled while building the tree
for symbol, info in tree.items():
    if info['length'] <= FAST_LOOKUP_BITS:
        # All lookup indices that extend this code
        code = info['code']
        for i in range(1 << (FAST_LOOKUP_BITS - info['length'])):
            lookup_code = code | (i << info['length'])
            fast_table[lookup_code] = symbol
```

### 2. Bit buffering

```python
# Read up to 32 bits at a time instead of bit by bit
def refill_bits(self):
    """Refill the bit buffer"""
    while self.bits_available < 24 and self.input_pos < len(self.input_data):
        byte = self.input_data[self.input_pos]
        self.input_pos += 1
        self.bit_buffer |= byte << self.bits_available
        self.bits_available += 8
```

## Debugging and testing

```python
def debug_huffman_decode(data):
    """Decode with debug tracing"""
    decoder = HuffmanDecoder(data, len(data) * 10)

    original_read_bits = decoder.read_bits

    def debug_read_bits(count):
        result = original_read_bits(count)
        print(f"Read {count} bits: 0x{result:0{count//4}X} ({result})")
        return result

    decoder.read_bits = debug_read_bits
    return decoder.decompress()
```

## Conclusion

This Huffman decoder implements a **DEFLATE**-compatible algorithm with three block modes:

1. **Uncompressed** - for incompressible data
2. **Fixed Huffman** - fast decoding with predefined tables
3. **Dynamic Huffman** - best compression with per-block tables

**Key features:**
- LZ77 support for repeated sequences
- Canonical Huffman codes for efficient decoding
- A 32 KB circular history buffer
- Fast-lookup-table optimizations

**Complexity:** O(n), where n is the size of the output data.


@@ -0,0 +1,578 @@
# NRes Format Documentation

## Overview

NRes is a resource container format used by the Nikita game engine. A file is an archive holding several packed files together with their metadata, with support for several compression methods.

## NRes file structure

### 1. File header (16 bytes)

```c
struct NResHeader {
    uint32_t signature;  // +0x00: "NRes" signature (0x7365524E as a little-endian u32)
    uint32_t version;    // +0x04: format version (0x00000100 = version 1.0)
    uint32_t fileCount;  // +0x08: number of files in the archive
    uint32_t fileSize;   // +0x0C: total file size in bytes
};
```

**Details** (a quick check is sketched below):

- `signature`: the constant `0x7365524E` (1936020046 in decimal); these are the ASCII bytes "NRes" read as a little-endian 32-bit integer
- `version`: must always be `0x00000100` (256 in decimal) for version 1.0
- `fileCount`: total number of files in the archive (used for validation)
- `fileSize`: full size of the NRes file, including the header
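
As a quick illustration of the two header constants above (plain `struct` usage; the helper name is mine):

```python
import struct

# "NRes" interpreted as a little-endian uint32 gives the signature constant
assert struct.unpack('<I', b'NRes')[0] == 0x7365524E == 1936020046

def read_header(raw: bytes):
    """Parse the 16-byte NRes header and validate the two constants."""
    signature, version, file_count, file_size = struct.unpack('<4I', raw[:16])
    if signature != 0x7365524E or version != 0x100:
        raise ValueError("not an NRes v1.0 file")
    return file_count, file_size
```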
### 2. File data

The packed file data starts immediately after the header (at offset 0x10). Files are stored back to back, one after another. The exact location of each file is given by its directory entry (see section 3).

**⚠️ IMPORTANT: Data alignment**

Each file's data is **aligned to an 8-byte boundary**. After a file's data is written, padding (zero bytes) is added up to the next multiple of 8.

**Alignment formula** (a small helper is sketched below):

```
aligned_size = (packed_size + 7) & ~7
padding_bytes = aligned_size - packed_size
```

**Examples:**

- A 100-byte file → 4 bytes of padding (up to 104)
- A 104-byte file → 0 bytes of padding (already aligned)
- A 105-byte file → 7 bytes of padding (up to 112)

This means that:

1. The next file's `dataOffset` is always a multiple of 8
2. There may be 0-7 zero padding bytes between files
3. When reading, use `packedSize`; do not recompute the alignment yourself
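
A minimal helper matching the formula and examples above (the function names are mine, not part of the format):

```python
def aligned_size(packed_size: int) -> int:
    """Round a packed size up to the next multiple of 8."""
    return (packed_size + 7) & ~7

def padding_bytes(packed_size: int) -> int:
    """Number of zero bytes written after the file data."""
    return aligned_size(packed_size) - packed_size

# The examples from the text above
assert (aligned_size(100), padding_bytes(100)) == (104, 4)
assert (aligned_size(104), padding_bytes(104)) == (104, 0)
assert (aligned_size(105), padding_bytes(105)) == (112, 7)
```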
### 3. File directory

The directory is located at the **end of the file**. Its position is computed as:

```
DirectoryOffset = FileSize - (FileCount * 64)
```

Each directory entry has a **fixed size of 64 bytes (0x40)**:

```c
struct NResFileEntry {
    char     name[16];     // +0x00: file name (NULL-terminated, uppercase)
    uint32_t crc32;        // +0x10: CRC32 of the packed data
    uint32_t packMethod;   // +0x14: packing-method flags and options
    uint32_t unpackedSize; // +0x18: file size after unpacking
    uint32_t packedSize;   // +0x1C: size of the packed data
    uint32_t dataOffset;   // +0x20: data offset from the start of the file
    uint32_t fastDataPtr;  // +0x24: fast-access pointer (in memory only)
    uint32_t xorSize;      // +0x28: size of the data covered by XOR encryption
    uint32_t sortIndex;    // +0x2C: index used for sorting by name
    uint32_t reserved[4];  // +0x30: reserved (usually zero)
};
```

## Directory fields in detail

### Field: name (offset +0x00, 16 bytes)

- **Purpose**: name of the file inside the archive
- **Format**: NULL-terminated string, at most 15 characters + NULL
- **Notes**:
  - All characters are stored in **UPPERCASE**
  - File lookups use a case-insensitive comparison (`_strcmpi`)
  - If the name is shorter than 16 bytes, the remainder is zero-filled

### Field: crc32 (offset +0x10, 4 bytes)

- **Purpose**: CRC32 checksum of the packed data
- **Usage**: integrity check when reading
### Field: packMethod (offset +0x14, 4 bytes)

**A critically important field.** It holds bit flags that select how the data is processed:

```c
// Masks for extracting the packing method
#define PACK_METHOD_MASK   0x1E0  // Bits 5-8 (primary method)
#define PACK_METHOD_MASK2  0x1C0  // Bits 6-7 (alternative mask)

// Packing methods (bits 5-8)
#define PACK_NONE      0x000  // No packing (plain copy)
#define PACK_XOR       0x020  // XOR encryption
#define PACK_FRES      0x040  // FRES compression (legacy)
#define PACK_FRES_XOR  0x060  // FRES + XOR (two passes)
#define PACK_ZLIB      0x080  // Zlib compression (legacy)
#define PACK_ZLIB_XOR  0x0A0  // Zlib + XOR (two passes)
#define PACK_HUFFMAN   0x0E0  // Huffman coding (primary method)

// Additional flags
#define FLAG_ENCRYPTED 0x040  // File is encrypted / needs decoding
```

**How to determine the method** (see the sketch after this list):

1. Extract the bits `packMethod & 0x1E0`
2. Check for the specific values:
   - `0x000`: data is not compressed, plain copy
   - `0x020`: XOR encryption with a two-byte key
   - `0x040` or `0x060`: FRES compression (possibly combined with XOR)
   - `0x080` or `0x0A0`: Zlib compression (possibly combined with XOR)
   - `0x0E0`: Huffman coding (the most common)
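
A minimal sketch of that dispatch, mirroring the constants above (the function and dictionary names are mine):

```python
PACK_METHOD_MASK = 0x1E0

PACK_METHOD_NAMES = {
    0x000: "none",
    0x020: "xor",
    0x040: "fres",
    0x060: "fres+xor",
    0x080: "zlib",
    0x0A0: "zlib+xor",
    0x0E0: "huffman",
}

def pack_method_name(pack_method: int) -> str:
    """Map a raw packMethod value to a human-readable method name."""
    bits = pack_method & PACK_METHOD_MASK
    return PACK_METHOD_NAMES.get(bits, f"unknown (0x{bits:X})")
```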
### Field: unpackedSize (offset +0x18, 4 bytes)

- **Purpose**: file size after full unpacking
- **Usage**:
  - Allocating memory for the unpacked data
  - Verifying that unpacking produced the expected amount of data

### Field: packedSize (offset +0x1C, 4 bytes)

- **Purpose**: size of the compressed data in the archive
- **Notes**:
  - If `packedSize == 0`, the file is empty or is a pointer entry
  - For uncompressed files, `packedSize == unpackedSize`

### Field: dataOffset (offset +0x20, 4 bytes)

- **Purpose**: absolute offset of the file's data from the start of the NRes file
- **Formula**: `BaseAddress + dataOffset = start of the data`
- **Range**: normally from 0x10 (right after the header) up to the start of the directory

### Field: fastDataPtr (offset +0x24, 4 bytes)

- **Purpose**: pointer to the data in memory, for fast access
- **Usage**: runtime only
- **In the file**: usually 0, or a relative offset
- **Note**: used by `rsLoadFast()` for files stored without packing

### Field: xorSize (offset +0x28, 4 bytes)

- **Purpose**: size of the data covered by XOR encryption in combined methods
- **Usage**:
  - When `packMethod & 0x60 == 0x60` (FRES + XOR)
  - XOR is first applied to this many bytes, then FRES is applied to the result
- **Value**: may differ from `packedSize` for multi-stage packing

### Field: sortIndex (offset +0x2C, 4 bytes)

- **Purpose**: index for fast lookups in the sorted directory
- **Usage**:
  - The directory is sorted alphabetically (by file name)
  - `sortIndex` stores the file's original sequence number
  - This allows `rsFind()` to use binary search

### Field: reserved (offset +0x30, 16 bytes)

- **Purpose**: reserved for future extensions
- **In the file**: usually zero-filled
- **May contain**: additional metadata in newer versions of the format
## Packing algorithms

### 1. No packing (PACK_NONE = 0x000)

```
Plain copy of the data:
memcpy(destination, source, packedSize);
```

### 2. XOR encryption (PACK_XOR = 0x020)

```c
// The key is taken from the crc32 field
uint16_t key = (uint16_t)(crc32 & 0xFFFF);

for (int i = 0; i < packedSize; i++) {
    uint8_t byte = source[i];
    destination[i] = byte ^ (key >> 8) ^ (key << 1);

    // Update the key
    uint8_t newByte = (key >> 8) ^ (key << 1);
    key = (newByte ^ ((key >> 8) >> 1)) | (newByte << 8);
}
```

**Key points:**

- A 16-bit key taken from the low bytes of the CRC32 is used
- The key is updated after every byte by the scheme above
- Operations: XOR with the key's high byte and with the shifted key value

### 3. [FRES compression](fres_decompression.md) (PACK_FRES = 0x040, 0x060)

FRES is an RLE-style compression with its own encoding of repeats:

```
sub_1001B22E() - FRES decompression routine
- Reads control bytes
- Decodes literals and repeats
- Uses a sliding window for back-references
```

### 4. [Huffman coding](huffman_decompression.md) (PACK_HUFFMAN = 0x0E0)

The most complex and the most effective method:

```c
// Decoder structure
struct HuffmanDecoder {
    uint32_t bitBuffer[0x4000];  // Bit buffer
    uint32_t compressedSize;     // Compressed data size
    uint32_t outputPosition;     // Current position in the output buffer
    uint32_t inputPosition;      // Position in the input data
    uint8_t* sourceData;         // Pointer to the compressed data
    uint8_t* destData;           // Pointer to the output buffer
    uint32_t bitPosition;        // Bit position within the buffer
    // ... additional fields
};
```

**Decoding steps:**

1. Initialize the decoder structure
2. Read bits and build the Huffman tree
3. Decode symbols through the tree
4. Write them to the output buffer
## High-level implementation guide

### Stage 1: Opening the file

```python
def open_nres_file(filepath):
    with open(filepath, 'rb') as f:
        # 1. Read the header (16 bytes)
        header_data = f.read(16)
        signature, version, file_count, file_size = struct.unpack('<4I', header_data)

        # 2. Check the signature
        if signature != 0x7365524E:  # "NRes"
            raise ValueError("Invalid file signature")

        # 3. Check the version
        if version != 0x100:
            raise ValueError(f"Unsupported version: {version}")

        # 4. Compute the directory location
        directory_offset = file_size - (file_count * 64)

        # 5. Read the whole file into memory (or use memory mapping)
        f.seek(0)
        file_data = f.read()

        return {
            'file_count': file_count,
            'file_size': file_size,
            'directory_offset': directory_offset,
            'data': file_data
        }
```
### Stage 2: Reading the directory

```python
def read_directory(nres_file):
    data = nres_file['data']
    offset = nres_file['directory_offset']
    file_count = nres_file['file_count']

    entries = []
    for i in range(file_count):
        entry_offset = offset + (i * 64)
        entry_data = data[entry_offset:entry_offset + 64]

        # Parse the 64-byte entry
        name = entry_data[0:16].decode('ascii').rstrip('\x00')
        crc32, pack_method, unpacked_size, packed_size, data_offset, \
            fast_ptr, xor_size, sort_index = struct.unpack('<8I', entry_data[16:48])

        entry = {
            'name': name,
            'crc32': crc32,
            'pack_method': pack_method,
            'unpacked_size': unpacked_size,
            'packed_size': packed_size,
            'data_offset': data_offset,
            'fast_data_ptr': fast_ptr,
            'xor_size': xor_size,
            'sort_index': sort_index
        }
        entries.append(entry)

    return entries
```
### Stage 3: Finding a file by name

```python
def find_file(entries, filename):
    # Names in the archive are stored in UPPERCASE
    search_name = filename.upper()

    # Binary search over the entries ordered by name
    # (sortIndex keeps each entry's original sequence number)
    sorted_entries = sorted(entries, key=lambda e: e['name'])

    left, right = 0, len(sorted_entries) - 1
    while left <= right:
        mid = (left + right) // 2
        mid_name = sorted_entries[mid]['name']

        if mid_name == search_name:
            return sorted_entries[mid]
        elif mid_name < search_name:
            left = mid + 1
        else:
            right = mid - 1

    return None
```
### Stage 4: Extracting a file's data

```python
def extract_file(nres_file, entry):
    data = nres_file['data']

    # 1. Grab the packed data
    packed_data = data[entry['data_offset']:
                       entry['data_offset'] + entry['packed_size']]

    # 2. Determine the packing method
    pack_method = entry['pack_method'] & 0x1E0

    # 3. Unpack according to the method
    if pack_method == 0x000:
        # No packing
        return unpack_none(packed_data)
    elif pack_method == 0x020:
        # XOR encryption
        return unpack_xor(packed_data, entry['crc32'], entry['unpacked_size'])
    elif pack_method == 0x040 or pack_method == 0x060:
        # FRES compression (possibly combined with XOR)
        if pack_method == 0x060:
            # XOR first
            temp_data = unpack_xor(packed_data, entry['crc32'], entry['xor_size'])
            return unpack_fres(temp_data, entry['unpacked_size'])
        else:
            return unpack_fres(packed_data, entry['unpacked_size'])
    elif pack_method == 0x0E0:
        # Huffman coding
        return unpack_huffman(packed_data, entry['unpacked_size'])
    else:
        raise ValueError(f"Unsupported packing method: 0x{pack_method:X}")
```
### Stage 5: Implementing the unpackers

```python
def unpack_none(data):
    """No packing - return the data as-is"""
    return data


def unpack_xor(data, crc32, size):
    """XOR decryption with a rolling key"""
    result = bytearray(size)
    key = crc32 & 0xFFFF  # Take the low 16 bits

    for i in range(min(size, len(data))):
        byte = data[i]

        # XOR step
        high_byte = (key >> 8) & 0xFF
        shifted = (key << 1) & 0xFFFF
        result[i] = byte ^ high_byte ^ (shifted & 0xFF)

        # Key update
        new_byte = (high_byte ^ (key << 1)) & 0xFF
        key = (new_byte ^ (high_byte >> 1)) | (new_byte << 8)
        key &= 0xFFFF

    return bytes(result)


def unpack_fres(data, unpacked_size):
    """
    FRES decompression - a hybrid RLE+LZ77 algorithm
    Full implementation in nres_decompression.py (class FRESDecoder)
    """
    from nres_decompression import FRESDecoder
    decoder = FRESDecoder()
    return decoder.decompress(data, unpacked_size)


def unpack_huffman(data, unpacked_size):
    """
    Huffman decoding (DEFLATE-like)
    Full implementation in nres_decompression.py (class HuffmanDecoder)
    """
    from nres_decompression import HuffmanDecoder
    decoder = HuffmanDecoder()
    return decoder.decompress(data, unpacked_size)
```
### Stage 6: Extracting every file

```python
def extract_all(nres_filepath, output_dir):
    import os

    # 1. Open the NRes file
    nres_file = open_nres_file(nres_filepath)

    # 2. Read the directory
    entries = read_directory(nres_file)

    # 3. Create the output directory
    os.makedirs(output_dir, exist_ok=True)

    # 4. Extract each file
    for entry in entries:
        print(f"Extracting: {entry['name']}")
        try:
            # Unpack the data
            unpacked_data = extract_file(nres_file, entry)

            # Write it to disk
            output_path = os.path.join(output_dir, entry['name'])
            with open(output_path, 'wb') as f:
                f.write(unpacked_data)

            print(f"  ✓ OK ({len(unpacked_data)} bytes)")
        except Exception as e:
            print(f"  ✗ Error: {e}")
```
## Notes and caveats

### 1. Byte order (endianness)

- **All multi-byte values are stored little-endian**
- When reading, use `struct.unpack('<...')`

### 2. Directory ordering

- The file directory is **sorted by file name** (alphabetical order)
- The `sortIndex` field stores the original index from before sorting
- This makes binary search possible

### 3. Character case

- All file names are converted to **UPPERCASE**
- Use case-insensitive comparison when searching

### 4. Memory mapping

- The original code uses `MapViewOfFile` to handle large files efficiently
- Memory-mapped files are recommended for large archives

### 5. Data validation

- **Always check the signature** before processing
- **Check the format version**
- **Check the CRC32** of the packed data
- **Check the sizes** (the unpacked result must match `unpacked_size`)

### 6. Error handling

- The file may be corrupted
- The packing method may be unsupported
- The data may be partially encrypted

### 7. Performance

- Uncompressed files (`packMethod & 0x1E0 == 0`) can be read directly
- The `fastDataPtr` field may hold a cached pointer
- Use buffering for sequential reads

### 8. Data alignment

- **All file data is aligned to 8 bytes**
- Each file may be followed by 0-7 zero padding bytes
- The next file's `dataOffset` is always a multiple of 8
- When reading, use the entry's `packedSize`; do not compute the alignment
- When building an archive, add padding: `padding = ((size + 7) & ~7) - size`
## Usage example

```python
# Open the archive
nres = open_nres_file("resources.nres")

# Read the directory
entries = read_directory(nres)

# List the files
for entry in entries:
    print(f"{entry['name']:20s} - {entry['unpacked_size']:8d} bytes")

# Find a specific file
entry = find_file(entries, "texture.bmp")
if entry:
    data = extract_file(nres, entry)
    with open("extracted_texture.bmp", "wb") as f:
        f.write(data)

# Extract everything
extract_all("resources.nres", "./extracted/")
```
## Additional helpers

### Checking the file format

```python
def is_nres_file(filepath):
    try:
        with open(filepath, 'rb') as f:
            signature = struct.unpack('<I', f.read(4))[0]
            return signature == 0x7365524E
    except (OSError, struct.error):
        return False
```

### Getting file information

```python
def get_file_info(entry):
    pack_names = {
        0x000: "None",
        0x020: "XOR",
        0x040: "FRES",
        0x060: "FRES+XOR",
        0x080: "Zlib",
        0x0A0: "Zlib+XOR",
        0x0E0: "Huffman"
    }

    pack_method = entry['pack_method'] & 0x1E0
    pack_name = pack_names.get(pack_method, f"Unknown (0x{pack_method:X})")

    ratio = 100.0 * entry['packed_size'] / entry['unpacked_size'] if entry['unpacked_size'] > 0 else 0

    return {
        'name': entry['name'],
        'size': entry['unpacked_size'],
        'packed': entry['packed_size'],
        'compression': pack_name,
        'ratio': f"{ratio:.1f}%",
        'crc32': f"0x{entry['crc32']:08X}"
    }
```

## Conclusion

NRes is a compact resource archive format with a fixed 64-byte directory entry, 8-byte data alignment, and support for several packing methods.


@@ -1,16 +0,0 @@
[package]
name = "libnres"
version = "0.1.4"
description = "Library for NRes files"
authors = ["Valentin Popov <valentin@popov.link>"]
homepage = "https://git.popov.link/valentineus/fparkan"
repository = "https://git.popov.link/valentineus/fparkan.git"
license = "GPL-2.0"
edition = "2021"
keywords = ["gamedev", "library", "nres"]
[dependencies]
byteorder = "1.4"
log = "0.4"
miette = "5.6"
thiserror = "1.0"


@@ -1,25 +0,0 @@
# Library for NRes files (Deprecated)

Library for viewing and retrieving game resources of the game **"Parkan: Iron Strategy"**.
All versions of the game are supported: Demo, IS, IS: Part 1, IS: Part 2.

Supports files with `lib`, `trf`, `rlb` extensions.
The files `gamefont.rlb` and `sprites.lib` are not supported:
these files have an unknown signature.

## Example

Example of extracting game resources:

```rust
fn main() {
    let file = std::fs::File::open("./voices.lib").unwrap();

    // Extracting the list of files
    let list = libnres::reader::get_list(&file).unwrap();

    for element in list {
        // Extracting the contents of the file
        let data = libnres::reader::get_file(&file, &element).unwrap();
    }
}
```


@@ -1,33 +0,0 @@
use crate::error::ConverterError;

/// Method for converting u32 to u64.
pub fn u32_to_u64(value: u32) -> Result<u64, ConverterError> {
    match u64::try_from(value) {
        Err(error) => Err(ConverterError::Infallible(error)),
        Ok(result) => Ok(result),
    }
}

/// Method for converting u32 to usize.
pub fn u32_to_usize(value: u32) -> Result<usize, ConverterError> {
    match usize::try_from(value) {
        Err(error) => Err(ConverterError::TryFromIntError(error)),
        Ok(result) => Ok(result),
    }
}

/// Method for converting u64 to u32.
pub fn u64_to_u32(value: u64) -> Result<u32, ConverterError> {
    match u32::try_from(value) {
        Err(error) => Err(ConverterError::TryFromIntError(error)),
        Ok(result) => Ok(result),
    }
}

/// Method for converting usize to u32.
pub fn usize_to_u32(value: usize) -> Result<u32, ConverterError> {
    match u32::try_from(value) {
        Err(error) => Err(ConverterError::TryFromIntError(error)),
        Ok(result) => Ok(result),
    }
}


@@ -1,45 +0,0 @@
extern crate miette;
extern crate thiserror;
use miette::Diagnostic;
use thiserror::Error;
#[derive(Error, Diagnostic, Debug)]
pub enum ConverterError {
#[error("error converting an value")]
#[diagnostic(code(libnres::infallible))]
Infallible(#[from] std::convert::Infallible),
#[error("error converting an value")]
#[diagnostic(code(libnres::try_from_int_error))]
TryFromIntError(#[from] std::num::TryFromIntError),
}
#[derive(Error, Diagnostic, Debug)]
pub enum ReaderError {
#[error(transparent)]
#[diagnostic(code(libnres::convert_error))]
ConvertValue(#[from] ConverterError),
#[error("incorrect header format")]
#[diagnostic(code(libnres::list_type_error))]
IncorrectHeader,
#[error("incorrect file size (expected {expected:?} bytes, received {received:?} bytes)")]
#[diagnostic(code(libnres::file_size_error))]
IncorrectSizeFile { expected: u32, received: u32 },
#[error(
"incorrect size of the file list (not a multiple of {expected:?}, received {received:?})"
)]
#[diagnostic(code(libnres::list_size_error))]
IncorrectSizeList { expected: u32, received: u32 },
#[error("resource file reading error")]
#[diagnostic(code(libnres::io_error))]
ReadFile(#[from] std::io::Error),
#[error("file is too small (must be at least {expected:?} bytes, received {received:?} bytes)")]
#[diagnostic(code(libnres::file_size_error))]
SmallFile { expected: u32, received: u32 },
}

View File

@@ -1,24 +0,0 @@
/// First constant value of the NRes file ("NRes" characters in numeric)
pub const FILE_TYPE_1: u32 = 1936020046;
/// Second constant value of the NRes file
pub const FILE_TYPE_2: u32 = 256;
/// Size of the element item (in bytes)
pub const LIST_ELEMENT_SIZE: u32 = 64;
/// Minimum allowed file size (in bytes)
pub const MINIMUM_FILE_SIZE: u32 = 16;
static DEBUG: std::sync::atomic::AtomicBool = std::sync::atomic::AtomicBool::new(false);
mod converter;
mod error;
pub mod reader;
/// Get debug status value
pub fn get_debug() -> bool {
DEBUG.load(std::sync::atomic::Ordering::Relaxed)
}
/// Change debug status value
pub fn set_debug(value: bool) {
DEBUG.store(value, std::sync::atomic::Ordering::Relaxed)
}

View File

@@ -1,227 +0,0 @@
use std::io::{Read, Seek};
use byteorder::ByteOrder;
use crate::error::ReaderError;
use crate::{converter, FILE_TYPE_1, FILE_TYPE_2, LIST_ELEMENT_SIZE, MINIMUM_FILE_SIZE};
#[derive(Debug)]
pub struct ListElement {
/// Unknown parameter
_unknown0: i32,
/// Unknown parameter
_unknown1: i32,
/// Unknown parameter
_unknown2: i32,
/// File extension
pub extension: String,
/// Identifier or sequence number
pub index: u32,
/// File name
pub name: String,
/// Position in the file
pub position: u32,
/// File size (in bytes)
pub size: u32,
}
impl ListElement {
/// Get full name of the file
pub fn get_filename(&self) -> String {
format!("{}.{}", self.name, self.extension)
}
}
#[derive(Debug)]
pub struct FileHeader {
/// File size
size: u32,
/// Number of files
total: u32,
/// First constant value
type1: u32,
/// Second constant value
type2: u32,
}
/// Get the data of a packed file
pub fn get_file(file: &std::fs::File, element: &ListElement) -> Result<Vec<u8>, ReaderError> {
let size = get_file_size(file)?;
check_file_size(size)?;
let header = get_file_header(file)?;
check_file_header(&header, size)?;
let data = get_element_data(file, element)?;
Ok(data)
}
/// Get a list of packed files
pub fn get_list(file: &std::fs::File) -> Result<Vec<ListElement>, ReaderError> {
let mut list: Vec<ListElement> = Vec::new();
let size = get_file_size(file)?;
check_file_size(size)?;
let header = get_file_header(file)?;
check_file_header(&header, size)?;
get_file_list(file, &header, &mut list)?;
Ok(list)
}
fn check_file_header(header: &FileHeader, size: u32) -> Result<(), ReaderError> {
if header.type1 != FILE_TYPE_1 || header.type2 != FILE_TYPE_2 {
return Err(ReaderError::IncorrectHeader);
}
if header.size != size {
return Err(ReaderError::IncorrectSizeFile {
expected: size,
received: header.size,
});
}
Ok(())
}
fn check_file_size(size: u32) -> Result<(), ReaderError> {
if size < MINIMUM_FILE_SIZE {
return Err(ReaderError::SmallFile {
expected: MINIMUM_FILE_SIZE,
received: size,
});
}
Ok(())
}
fn get_element_data(file: &std::fs::File, element: &ListElement) -> Result<Vec<u8>, ReaderError> {
let position = converter::u32_to_u64(element.position)?;
let size = converter::u32_to_usize(element.size)?;
let mut reader = std::io::BufReader::new(file);
let mut buffer = vec![0u8; size];
if let Err(error) = reader.seek(std::io::SeekFrom::Start(position)) {
return Err(ReaderError::ReadFile(error));
};
if let Err(error) = reader.read_exact(&mut buffer) {
return Err(ReaderError::ReadFile(error));
};
Ok(buffer)
}
fn get_element_position(index: u32) -> Result<(usize, usize), ReaderError> {
let from = converter::u32_to_usize(index * LIST_ELEMENT_SIZE)?;
let to = converter::u32_to_usize((index * LIST_ELEMENT_SIZE) + LIST_ELEMENT_SIZE)?;
Ok((from, to))
}
fn get_file_header(file: &std::fs::File) -> Result<FileHeader, ReaderError> {
let mut reader = std::io::BufReader::new(file);
let mut buffer = vec![0u8; MINIMUM_FILE_SIZE as usize];
if let Err(error) = reader.seek(std::io::SeekFrom::Start(0)) {
return Err(ReaderError::ReadFile(error));
};
if let Err(error) = reader.read_exact(&mut buffer) {
return Err(ReaderError::ReadFile(error));
};
let header = FileHeader {
size: byteorder::LittleEndian::read_u32(&buffer[12..16]),
total: byteorder::LittleEndian::read_u32(&buffer[8..12]),
type1: byteorder::LittleEndian::read_u32(&buffer[0..4]),
type2: byteorder::LittleEndian::read_u32(&buffer[4..8]),
};
buffer.clear();
Ok(header)
}
fn get_file_list(
file: &std::fs::File,
header: &FileHeader,
list: &mut Vec<ListElement>,
) -> Result<(), ReaderError> {
let (start_position, list_size) = get_list_position(header)?;
let mut reader = std::io::BufReader::new(file);
let mut buffer = vec![0u8; list_size];
if let Err(error) = reader.seek(std::io::SeekFrom::Start(start_position)) {
return Err(ReaderError::ReadFile(error));
};
if let Err(error) = reader.read_exact(&mut buffer) {
return Err(ReaderError::ReadFile(error));
}
let buffer_size = converter::usize_to_u32(buffer.len())?;
if buffer_size % LIST_ELEMENT_SIZE != 0 {
return Err(ReaderError::IncorrectSizeList {
expected: LIST_ELEMENT_SIZE,
received: buffer_size,
});
}
for i in 0..(buffer_size / LIST_ELEMENT_SIZE) {
let (from, to) = get_element_position(i)?;
let chunk: &[u8] = &buffer[from..to];
let element = get_list_element(chunk)?;
list.push(element);
}
buffer.clear();
Ok(())
}
fn get_file_size(file: &std::fs::File) -> Result<u32, ReaderError> {
let metadata = match file.metadata() {
Err(error) => return Err(ReaderError::ReadFile(error)),
Ok(value) => value,
};
let result = converter::u64_to_u32(metadata.len())?;
Ok(result)
}
fn get_list_element(buffer: &[u8]) -> Result<ListElement, ReaderError> {
let index = byteorder::LittleEndian::read_u32(&buffer[60..64]);
let position = byteorder::LittleEndian::read_u32(&buffer[56..60]);
let size = byteorder::LittleEndian::read_u32(&buffer[12..16]);
let unknown0 = byteorder::LittleEndian::read_i32(&buffer[4..8]);
let unknown1 = byteorder::LittleEndian::read_i32(&buffer[8..12]);
let unknown2 = byteorder::LittleEndian::read_i32(&buffer[16..20]);
let extension = String::from_utf8_lossy(&buffer[0..4])
.trim_matches(char::from(0))
.to_string();
let name = String::from_utf8_lossy(&buffer[20..56])
.trim_matches(char::from(0))
.to_string();
Ok(ListElement {
_unknown0: unknown0,
_unknown1: unknown1,
_unknown2: unknown2,
extension,
index,
name,
position,
size,
})
}
fn get_list_position(header: &FileHeader) -> Result<(u64, usize), ReaderError> {
let position = converter::u32_to_u64(header.size - (header.total * LIST_ELEMENT_SIZE))?;
let size = converter::u32_to_usize(header.total * LIST_ELEMENT_SIZE)?;
Ok((position, size))
}

36
mkdocs.yml Normal file
View File

@@ -0,0 +1,36 @@
# Project information
site_name: FParkan
site_url: https://fparkan.popov.link/
site_author: Valentin Popov
site_description: >-
  Utilities and tools for the game “Parkan: Iron Strategy”.

# Repository
repo_name: valentineus/fparkan
repo_url: https://github.com/valentineus/fparkan

# Copyright
copyright: Copyright &copy; 2023 &mdash; 2024 Valentin Popov

# Configuration
theme:
  name: material
  language: en
  palette:
    scheme: slate

# Navigation
nav:
  - Home: index.md
  - Specs:
      - Assets:
          - NRes:
              - Format documentation: specs/assets/nres/overview.md
              - FRES decompression: specs/assets/nres/fres_decompression.md
              - Huffman decompression: specs/assets/nres/huffman_decompression.md

# Additional configuration
extra:
  social:
    - icon: fontawesome/brands/github
      link: https://github.com/valentineus/fparkan

View File

@@ -1,20 +0,0 @@
[package]
name = "nres-cli"
version = "0.2.3"
description = "Console tool for NRes files"
authors = ["Valentin Popov <valentin@popov.link>"]
homepage = "https://git.popov.link/valentineus/fparkan"
repository = "https://git.popov.link/valentineus/fparkan.git"
license = "GPL-2.0"
edition = "2021"
keywords = ["cli", "gamedev", "nres"]
[dependencies]
byteorder = "1.4"
clap = { version = "4.2", features = ["derive"] }
console = "0.15"
dialoguer = { version = "0.10", features = ["completion"] }
indicatif = "0.17"
libnres = { version = "0.1", path = "../libnres" }
miette = { version = "5.6", features = ["fancy"] }
tempdir = "0.3"

View File

@@ -1,6 +0,0 @@
# Console tool for NRes files (Deprecated)
## Commands
- `extract` - Extract game resources from a "NRes" file.
- `ls` - Get a list of files in a "NRes" file.

View File

@@ -1,198 +0,0 @@
extern crate core;
extern crate libnres;
use std::io::Write;
use clap::{Parser, Subcommand};
use miette::{IntoDiagnostic, Result};
#[derive(Parser, Debug)]
#[command(name = "NRes CLI")]
#[command(about, author, version, long_about = None)]
struct Cli {
#[command(subcommand)]
command: Commands,
}
#[derive(Subcommand, Debug)]
enum Commands {
/// Check if the "NRes" file can be extracted
Check {
/// "NRes" file
file: String,
},
/// Print debugging information on the "NRes" file
#[command(arg_required_else_help = true)]
Debug {
/// "NRes" file
file: String,
/// Filter results by file name
#[arg(long)]
name: Option<String>,
},
/// Extract files or a file from the "NRes" file
#[command(arg_required_else_help = true)]
Extract {
/// "NRes" file
file: String,
/// Overwrite files
#[arg(short, long, default_value_t = false, value_name = "TRUE|FALSE")]
force: bool,
/// Output directory
#[arg(short, long, value_name = "DIR")]
out: String,
},
/// Print a list of files in the "NRes" file
#[command(arg_required_else_help = true)]
Ls {
/// "NRes" file
file: String,
},
}
pub fn main() -> Result<()> {
let stdout = console::Term::stdout();
let cli = Cli::parse();
match cli.command {
Commands::Check { file } => command_check(stdout, file)?,
Commands::Debug { file, name } => command_debug(stdout, file, name)?,
Commands::Extract { file, force, out } => command_extract(stdout, file, out, force)?,
Commands::Ls { file } => command_ls(stdout, file)?,
}
Ok(())
}
fn command_check(_stdout: console::Term, file: String) -> Result<()> {
let file = std::fs::File::open(file).into_diagnostic()?;
let list = libnres::reader::get_list(&file).into_diagnostic()?;
let tmp = tempdir::TempDir::new("nres").into_diagnostic()?;
let bar = indicatif::ProgressBar::new(list.len() as u64);
bar.set_style(get_bar_style()?);
for element in list {
bar.set_message(element.get_filename());
let path = tmp.path().join(element.get_filename());
let mut output = std::fs::File::create(path).into_diagnostic()?;
let mut buffer = libnres::reader::get_file(&file, &element).into_diagnostic()?;
output.write_all(&buffer).into_diagnostic()?;
buffer.clear();
bar.inc(1);
}
bar.finish();
Ok(())
}
fn command_debug(stdout: console::Term, file: String, name: Option<String>) -> Result<()> {
let file = std::fs::File::open(file).into_diagnostic()?;
let mut list = libnres::reader::get_list(&file).into_diagnostic()?;
let mut total_files_size: u32 = 0;
let mut total_files_gap: u32 = 0;
let mut total_files: u32 = 0;
for (index, item) in list.iter().enumerate() {
total_files_size += item.size;
total_files += 1;
let mut gap = 0;
if index > 1 {
let previous_item = &list[index - 1];
gap = item.position - (previous_item.position + previous_item.size);
}
total_files_gap += gap;
}
if let Some(name) = name {
list.retain(|item| item.name.contains(&name));
};
for (index, item) in list.iter().enumerate() {
let mut gap = 0;
if index > 1 {
let previous_item = &list[index - 1];
gap = item.position - (previous_item.position + previous_item.size);
}
let text = format!("Index: {};\nGap: {};\nItem: {:#?};\n", index, gap, item);
stdout.write_line(&text).into_diagnostic()?;
}
let text = format!(
"Total files: {};\nTotal files gap: {} (bytes);\nTotal files size: {} (bytes);",
total_files, total_files_gap, total_files_size
);
stdout.write_line(&text).into_diagnostic()?;
Ok(())
}
fn command_extract(_stdout: console::Term, file: String, out: String, force: bool) -> Result<()> {
let file = std::fs::File::open(file).into_diagnostic()?;
let list = libnres::reader::get_list(&file).into_diagnostic()?;
let bar = indicatif::ProgressBar::new(list.len() as u64);
bar.set_style(get_bar_style()?);
for element in list {
bar.set_message(element.get_filename());
let path = format!("{}/{}", out, element.get_filename());
if !force && is_exist_file(&path) {
let message = format!("File \"{}\" exists. Overwrite it?", path);
if !dialoguer::Confirm::new()
.with_prompt(message)
.interact()
.into_diagnostic()?
{
continue;
}
}
let mut output = std::fs::File::create(path).into_diagnostic()?;
let mut buffer = libnres::reader::get_file(&file, &element).into_diagnostic()?;
output.write_all(&buffer).into_diagnostic()?;
buffer.clear();
bar.inc(1);
}
bar.finish();
Ok(())
}
fn command_ls(stdout: console::Term, file: String) -> Result<()> {
let file = std::fs::File::open(file).into_diagnostic()?;
let list = libnres::reader::get_list(&file).into_diagnostic()?;
for element in list {
stdout.write_line(&element.name).into_diagnostic()?;
}
Ok(())
}
fn get_bar_style() -> Result<indicatif::ProgressStyle> {
Ok(
indicatif::ProgressStyle::with_template("[{bar:32}] {pos:>7}/{len:7} {msg}")
.into_diagnostic()?
.progress_chars("=>-"),
)
}
fn is_exist_file(path: &String) -> bool {
let metadata = std::path::Path::new(path);
metadata.exists()
}

View File

@@ -1,9 +0,0 @@
[package]
name = "packer"
version = "0.1.0"
edition = "2021"
[dependencies]
byteorder = "1.4.3"
serde = { version = "1.0.160", features = ["derive"] }
serde_json = "1.0.96"

View File

@@ -1,27 +0,0 @@
# NRes Game Resource Packer
At the moment, this is a demonstration of the NRes game resource packing algorithm in action.
It packs 100% of the NRes game resources for the game "Parkan: Iron Strategy".
The hash sums of the resulting files match the original game files.
__Attention!__
This is a test version of the utility. It overwrites the specified final file without asking.
## Building
To build the tools, you need to run the following command in the root directory:
```bash
cargo build --release
```
## Running
You can run the utility with the following command:
```bash
./target/release/packer /path/to/unpack /path/to/file.ex
```
- `/path/to/unpack`: This is the directory with the resources unpacked by the [unpacker](../unpacker) utility.
- `/path/to/file.ex`: This is the final file that will be created.
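To verify the claim above that the repacked archives match the originals, you can compare a checksum of the packer's output against the original game file. The snippet below is only an illustrative helper built on the Python standard library; pass the original archive and the repacked file as arguments.
```python
import hashlib
import sys

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large archives do not need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

original, repacked = sys.argv[1], sys.argv[2]
print("match" if sha256_of(original) == sha256_of(repacked) else "differ")
```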

View File

@@ -1,175 +0,0 @@
use std::env;
use std::{
fs::{self, File},
io::{BufReader, Read},
};
use byteorder::{ByteOrder, LittleEndian};
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug)]
pub struct ImportListElement {
pub extension: String,
pub index: u32,
pub name: String,
pub unknown0: u32,
pub unknown1: u32,
pub unknown2: u32,
}
#[derive(Debug)]
pub struct ListElement {
pub extension: String,
pub index: u32,
pub name: String,
pub position: u32,
pub size: u32,
pub unknown0: u32,
pub unknown1: u32,
pub unknown2: u32,
}
fn main() {
let args: Vec<String> = env::args().collect();
let input = &args[1];
let output = &args[2];
pack(String::from(input), String::from(output));
}
fn pack(input: String, output: String) {
// Load the index file
let index_file = format!("{}/{}", input, "index.json");
let data = fs::read_to_string(index_file).unwrap();
let list: Vec<ImportListElement> = serde_json::from_str(&data).unwrap();
// Common buffer for storing the files
let mut content_buffer: Vec<u8> = Vec::new();
let mut list_buffer: Vec<u8> = Vec::new();
// Total number of files
let total_files: u32 = list.len() as u32;
for (index, item) in list.iter().enumerate() {
// Open the file handle
let path = format!("{}/{}.{}", input, item.name, item.index);
let file = File::open(path).unwrap();
let metadata = file.metadata().unwrap();
// Read the file into a buffer
let mut reader = BufReader::new(file);
let mut file_buffer: Vec<u8> = Vec::new();
reader.read_to_end(&mut file_buffer).unwrap();
// Align the buffer
if index != 0 {
while content_buffer.len() % 8 != 0 {
content_buffer.push(0);
}
}
// Compute the file position
let position = content_buffer.len() + 16;
// Write the file into the buffer
content_buffer.extend(file_buffer);
// Build the list element
let element = ListElement {
extension: item.extension.to_string(),
index: item.index,
name: item.name.to_string(),
position: position as u32,
size: metadata.len() as u32,
unknown0: item.unknown0,
unknown1: item.unknown1,
unknown2: item.unknown2,
};
// Create a buffer from the element
let mut element_buffer: Vec<u8> = Vec::new();
// Write the file type (extension)
let mut extension_buffer: [u8; 4] = [0; 4];
let mut file_extension_buffer = element.extension.into_bytes();
file_extension_buffer.resize(4, 0);
extension_buffer.copy_from_slice(&file_extension_buffer);
element_buffer.extend(extension_buffer);
// Write unknown value #1
let mut unknown0_buffer: [u8; 4] = [0; 4];
LittleEndian::write_u32(&mut unknown0_buffer, element.unknown0);
element_buffer.extend(unknown0_buffer);
// Write unknown value #2
let mut unknown1_buffer: [u8; 4] = [0; 4];
LittleEndian::write_u32(&mut unknown1_buffer, element.unknown1);
element_buffer.extend(unknown1_buffer);
// Write the file size
let mut file_size_buffer: [u8; 4] = [0; 4];
LittleEndian::write_u32(&mut file_size_buffer, element.size);
element_buffer.extend(file_size_buffer);
// Write unknown value #3
let mut unknown2_buffer: [u8; 4] = [0; 4];
LittleEndian::write_u32(&mut unknown2_buffer, element.unknown2);
element_buffer.extend(unknown2_buffer);
// Write the file name
let mut name_buffer: [u8; 36] = [0; 36];
let mut file_name_buffer = element.name.into_bytes();
file_name_buffer.resize(36, 0);
name_buffer.copy_from_slice(&file_name_buffer);
element_buffer.extend(name_buffer);
// Write the file position
let mut position_buffer: [u8; 4] = [0; 4];
LittleEndian::write_u32(&mut position_buffer, element.position);
element_buffer.extend(position_buffer);
// Write the file index
let mut index_buffer: [u8; 4] = [0; 4];
LittleEndian::write_u32(&mut index_buffer, element.index);
element_buffer.extend(index_buffer);
// Append the element buffer to the list buffer
list_buffer.extend(element_buffer);
}
// Align the buffer
while content_buffer.len() % 8 != 0 {
content_buffer.push(0);
}
let mut header_buffer: Vec<u8> = Vec::new();
// Write the first file-type constant
let mut header_type_1 = [0; 4];
LittleEndian::write_u32(&mut header_type_1, 1936020046_u32);
header_buffer.extend(header_type_1);
// Write the second file-type constant
let mut header_type_2 = [0; 4];
LittleEndian::write_u32(&mut header_type_2, 256_u32);
header_buffer.extend(header_type_2);
// Write the number of files
let mut header_total_files = [0; 4];
LittleEndian::write_u32(&mut header_total_files, total_files);
header_buffer.extend(header_total_files);
// Write the total file size
let mut header_total_size = [0; 4];
let total_size: u32 = ((content_buffer.len() + 16) as u32) + (total_files * 64);
LittleEndian::write_u32(&mut header_total_size, total_size);
header_buffer.extend(header_total_size);
let mut result_buffer: Vec<u8> = Vec::new();
result_buffer.extend(header_buffer);
result_buffer.extend(content_buffer);
result_buffer.extend(list_buffer);
fs::write(output, result_buffer).unwrap();
}

6
renovate.config.cjs Normal file
View File

@@ -0,0 +1,6 @@
module.exports = {
endpoint: "https://code.popov.link",
gitAuthor: "renovate[bot] <renovatebot@noreply.localhost>",
optimizeForDisabled: true,
platform: "gitea",
};

1
requirements.txt Normal file
View File

@@ -0,0 +1 @@
mkdocs-material

View File

@@ -1,8 +0,0 @@
[package]
name = "texture-decoder"
version = "0.1.0"
edition = "2021"
[dependencies]
byteorder = "1.4.3"
image = "0.24.7"

View File

@@ -1,13 +0,0 @@
# Texture decoder
Build:
```bash
cargo build --release
```
Run:
```bash
./target/release/texture-decoder ./out/AIM_02.0 ./out/AIM_02.0.png
```

View File

@@ -1,41 +0,0 @@
use std::io::Read;
use byteorder::ReadBytesExt;
use image::Rgba;
fn decode_texture(file_path: &str, output_path: &str) -> Result<(), std::io::Error> {
// Read the file
let mut file = std::fs::File::open(file_path)?;
let mut buffer: Vec<u8> = Vec::new();
file.read_to_end(&mut buffer)?;
// Decode the metadata
let mut cursor = std::io::Cursor::new(&buffer[4..]);
let img_width = cursor.read_u32::<byteorder::LittleEndian>()?;
let img_height = cursor.read_u32::<byteorder::LittleEndian>()?;
// Skip the remaining metadata bytes
cursor.set_position(20);
// Extract the image data
let image_data = buffer[cursor.position() as usize..].to_vec();
let img =
image::ImageBuffer::<Rgba<u8>, _>::from_raw(img_width, img_height, image_data.to_vec())
.expect("Failed to decode image");
// Save the image
img.save(output_path).unwrap();
Ok(())
}
fn main() {
let args: Vec<String> = std::env::args().collect();
let input = &args[1];
let output = &args[2];
if let Err(err) = decode_texture(input, output) {
eprintln!("Error: {}", err)
}
}

View File

@@ -1,9 +0,0 @@
[package]
name = "unpacker"
version = "0.1.1"
edition = "2021"
[dependencies]
byteorder = "1.4.3"
serde = { version = "1.0.160", features = ["derive"] }
serde_json = "1.0.96"

View File

@@ -1,41 +0,0 @@
# NRes Game Resource Unpacker
At the moment, this is a demonstration of the NRes game resource unpacking algorithm in action.
It unpacks 100% of the NRes game resources for the game "Parkan: Iron Strategy".
The unpacked resources can be packed again using the [packer](../packer) utility and replace the original game files.
__Attention!__
This is a test version of the utility.
It overwrites existing files without asking.
## Building
To build the tools, you need to run the following command in the root directory:
```bash
cargo build --release
```
## Running
You can run the utility with the following command:
```bash
./target/release/unpacker /path/to/file.ex /path/to/output
```
- `/path/to/file.ex`: This is the file containing the game resources that will be unpacked.
- `/path/to/output`: This is the directory where the unpacked files will be placed.
## How it Works
The structure describing the packed game resources is not fully understood yet.
Therefore, the utility saves unpacked files in the format `file_name.file_index` because some files have the same name.
Additionally, an `index.json` file is created, which is important for re-packing the files.
This file lists all the fields that game resources have in their packed form.
It is essential to preserve the file index for the game to function correctly, as the game engine looks for the necessary files by index.
Files can be replaced and packed back using the [packer](../packer).
The newly obtained game resource files are correctly processed by the game engine.
For example, sounds and 3D models of warbots' weapons were successfully replaced.
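As a quick illustration, the generated `index.json` can be inspected with a few lines of Python. The field names follow the `ListElement` structure serialized by the unpacker source below (`position` and `size` are intentionally omitted, since the packer recomputes them); the path is only an example output directory.
```python
import json

with open("path/to/output/index.json", "r", encoding="utf-8") as f:
    entries = json.load(f)

for entry in entries:
    # Each unpacked file on disk is named "<name>.<index>".
    print(f"{entry['index']:>5}  {entry['name']}.{entry['extension']}  "
          f"unknowns=({entry['unknown0']}, {entry['unknown1']}, {entry['unknown2']})")
```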

View File

@@ -1,124 +0,0 @@
use std::env;
use std::fs::File;
use std::io::{BufReader, BufWriter, Read, Seek, SeekFrom, Write};
use byteorder::{ByteOrder, LittleEndian};
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug)]
pub struct FileHeader {
pub size: u32,
pub total: u32,
pub type1: u32,
pub type2: u32,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct ListElement {
pub extension: String,
pub index: u32,
pub name: String,
#[serde(skip_serializing)]
pub position: u32,
#[serde(skip_serializing)]
pub size: u32,
pub unknown0: u32,
pub unknown1: u32,
pub unknown2: u32,
}
fn main() {
let args: Vec<String> = env::args().collect();
let input = &args[1];
let output = &args[2];
unpack(String::from(input), String::from(output));
}
fn unpack(input: String, output: String) {
let file = File::open(input).unwrap();
let metadata = file.metadata().unwrap();
let mut reader = BufReader::new(file);
let mut list: Vec<ListElement> = Vec::new();
// Read the file header
let mut header_buffer = [0u8; 16];
reader.seek(SeekFrom::Start(0)).unwrap();
reader.read_exact(&mut header_buffer).unwrap();
let file_header = FileHeader {
size: LittleEndian::read_u32(&header_buffer[12..16]),
total: LittleEndian::read_u32(&header_buffer[8..12]),
type1: LittleEndian::read_u32(&header_buffer[0..4]),
type2: LittleEndian::read_u32(&header_buffer[4..8]),
};
if file_header.type1 != 1936020046 || file_header.type2 != 256 {
panic!("this isn't NRes file");
}
if metadata.len() != file_header.size as u64 {
panic!("incorrect size")
}
// Read the file list
let list_files_start_position = file_header.size - (file_header.total * 64);
let list_files_size = file_header.total * 64;
let mut list_buffer = vec![0u8; list_files_size as usize];
reader
.seek(SeekFrom::Start(list_files_start_position as u64))
.unwrap();
reader.read_exact(&mut list_buffer).unwrap();
if list_buffer.len() % 64 != 0 {
panic!("invalid files list")
}
for i in 0..(list_buffer.len() / 64) {
let from = i * 64;
let to = (i * 64) + 64;
let chunk: &[u8] = &list_buffer[from..to];
let element_list = ListElement {
extension: String::from_utf8_lossy(&chunk[0..4])
.trim_matches(char::from(0))
.to_string(),
index: LittleEndian::read_u32(&chunk[60..64]),
name: String::from_utf8_lossy(&chunk[20..56])
.trim_matches(char::from(0))
.to_string(),
position: LittleEndian::read_u32(&chunk[56..60]),
size: LittleEndian::read_u32(&chunk[12..16]),
unknown0: LittleEndian::read_u32(&chunk[4..8]),
unknown1: LittleEndian::read_u32(&chunk[8..12]),
unknown2: LittleEndian::read_u32(&chunk[16..20]),
};
list.push(element_list)
}
// Unpack the files into the directory
for element in &list {
let path = format!("{}/{}.{}", output, element.name, element.index);
let mut file = File::create(path).unwrap();
let mut file_buffer = vec![0u8; element.size as usize];
reader
.seek(SeekFrom::Start(element.position as u64))
.unwrap();
reader.read_exact(&mut file_buffer).unwrap();
file.write_all(&file_buffer).unwrap();
file_buffer.clear();
}
// Export the file list to JSON
let path = format!("{}/{}", output, "index.json");
let file = File::create(path).unwrap();
let mut writer = BufWriter::new(file);
serde_json::to_writer_pretty(&mut writer, &list).unwrap();
writer.flush().unwrap();
}

View File

@@ -1 +0,0 @@
{"files":{"CHANGELOG.md":"ef9fa958318e442f1da7d204494cefec75c144aa6d5d5c93b0a5d6fcdf4ef6c6","Cargo.lock":"20b23c454fc3127f08a1bcd2864bbf029793759e6411fba24d44d8f4b7831ad0","Cargo.toml":"d0f15fde73d42bdf00e93f960dff908447225bede9364cb1659e44740a536c04","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"e99d88d232bf57d70f0fb87f6b496d44b6653f99f8a63d250a54c61ea4bcde40","README.md":"76d28502bd2e83f6a9e3576bd45e9a7fe5308448c4b5384b0d249515b5f67a5c","bench.plot.r":"6a5d7a4d36ed6b3d9919be703a479bef47698bf947818b483ff03951df2d4e01","benchmark.sh":"b35f89b1ca2c1dc0476cdd07f0284b72d41920d1c7b6054072f50ffba296d78d","coverage.sh":"4677e81922d08a82e83068a911717a247c66af12e559f37b78b6be3337ac9f07","examples/addr2line.rs":"3c5eb5a6726634df6cf53e4d67ee9f90c9ac09838303947f45c3bea1e84548b5","rustfmt.toml":"01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b","src/builtin_split_dwarf_loader.rs":"dc6979de81b35f82e97275e6be27ec61f3c4225ea10574a9e031813e00185174","src/function.rs":"68f047e0c78afe18ad165db255c8254ee74c35cd6df0cc07e400252981f661ed","src/lazy.rs":"0bf23f7098f1902f181e43c2ffa82a3f86df2c0dbcb9bc0ebce6a0168dd8b060","src/lib.rs":"9d6531f71fd138d31cc7596db9ab234198d0895a21ea9cb116434c19ec78b660","tests/correctness.rs":"4081f8019535305e3aa254c6a4e1436272dd873f9717c687ca0e66ea8d5871ed","tests/output_equivalence.rs":"b2cd7c59fa55808a2e66e9fe7f160d846867e3ecefe22c22a818f822c3c41f23","tests/parse.rs":"c2f7362e4679c1b4803b12ec6e8dca6da96aed7273fd210a857524a4182c30e7"},"package":"8a30b2e23b9e17a9f90641c7ab1549cd9b44f296d3ccbf309d2863cfe398a0cb"}

View File

@@ -1,336 +0,0 @@
# `addr2line` Change Log
--------------------------------------------------------------------------------
## 0.21.0 (2023/08/12)
### Breaking changes
* Updated `gimli`, `object`, and `fallible-iterator` dependencies.
### Changed
* The minimum supported rust version is 1.65.0.
* Store boxed slices instead of `Vec` objects in `Context`.
[#278](https://github.com/gimli-rs/addr2line/pull/278)
--------------------------------------------------------------------------------
## 0.20.0 (2023/04/15)
### Breaking changes
* The minimum supported rust version is 1.58.0.
* Changed `Context::find_frames` to return `LookupResult`.
Use `LookupResult::skip_all_loads` to obtain the result without loading split DWARF.
[#260](https://github.com/gimli-rs/addr2line/pull/260)
* Replaced `Context::find_dwarf_unit` with `Context::find_dwarf_and_unit`.
[#260](https://github.com/gimli-rs/addr2line/pull/260)
* Updated `object` dependency.
### Changed
* Fix handling of file index 0 for DWARF 5.
[#264](https://github.com/gimli-rs/addr2line/pull/264)
### Added
* Added types and methods to support loading split DWARF:
`LookupResult`, `SplitDwarfLoad`, `SplitDwarfLoader`, `Context::preload_units`.
[#260](https://github.com/gimli-rs/addr2line/pull/260)
[#262](https://github.com/gimli-rs/addr2line/pull/262)
[#263](https://github.com/gimli-rs/addr2line/pull/263)
--------------------------------------------------------------------------------
## 0.19.0 (2022/11/24)
### Breaking changes
* Updated `gimli` and `object` dependencies.
--------------------------------------------------------------------------------
## 0.18.0 (2022/07/16)
### Breaking changes
* Updated `object` dependency.
### Changed
* Fixed handling of relative path for `DW_AT_comp_dir`.
[#239](https://github.com/gimli-rs/addr2line/pull/239)
* Fixed handling of `DW_FORM_addrx` for DWARF 5 support.
[#243](https://github.com/gimli-rs/addr2line/pull/243)
* Fixed handling of units that are missing range information.
[#249](https://github.com/gimli-rs/addr2line/pull/249)
--------------------------------------------------------------------------------
## 0.17.0 (2021/10/24)
### Breaking changes
* Updated `gimli` and `object` dependencies.
### Changed
* Use `skip_attributes` to improve performance.
[#236](https://github.com/gimli-rs/addr2line/pull/236)
--------------------------------------------------------------------------------
## 0.16.0 (2021/07/26)
### Breaking changes
* Updated `gimli` and `object` dependencies.
--------------------------------------------------------------------------------
## 0.15.2 (2021/06/04)
### Fixed
* Allow `Context` to be `Send`.
[#219](https://github.com/gimli-rs/addr2line/pull/219)
--------------------------------------------------------------------------------
## 0.15.1 (2021/05/02)
### Fixed
* Don't ignore aranges with address 0.
[#217](https://github.com/gimli-rs/addr2line/pull/217)
--------------------------------------------------------------------------------
## 0.15.0 (2021/05/02)
### Breaking changes
* Updated `gimli` and `object` dependencies.
[#215](https://github.com/gimli-rs/addr2line/pull/215)
* Added `debug_aranges` parameter to `Context::from_sections`.
[#200](https://github.com/gimli-rs/addr2line/pull/200)
### Added
* Added `.debug_aranges` support.
[#200](https://github.com/gimli-rs/addr2line/pull/200)
* Added supplementary object file support.
[#208](https://github.com/gimli-rs/addr2line/pull/208)
### Fixed
* Fixed handling of Windows paths in locations.
[#209](https://github.com/gimli-rs/addr2line/pull/209)
* examples/addr2line: Flush stdout after each response.
[#210](https://github.com/gimli-rs/addr2line/pull/210)
* examples/addr2line: Avoid copying every section.
[#213](https://github.com/gimli-rs/addr2line/pull/213)
--------------------------------------------------------------------------------
## 0.14.1 (2020/12/31)
### Fixed
* Fix location lookup for skeleton units.
[#201](https://github.com/gimli-rs/addr2line/pull/201)
### Added
* Added `Context::find_location_range`.
[#196](https://github.com/gimli-rs/addr2line/pull/196)
[#199](https://github.com/gimli-rs/addr2line/pull/199)
--------------------------------------------------------------------------------
## 0.14.0 (2020/10/27)
### Breaking changes
* Updated `gimli` and `object` dependencies.
### Fixed
* Handle units that only have line information.
[#188](https://github.com/gimli-rs/addr2line/pull/188)
* Handle DWARF units with version <= 4 and no `DW_AT_name`.
[#191](https://github.com/gimli-rs/addr2line/pull/191)
* Fix handling of `DW_FORM_ref_addr`.
[#193](https://github.com/gimli-rs/addr2line/pull/193)
--------------------------------------------------------------------------------
## 0.13.0 (2020/07/07)
### Breaking changes
* Updated `gimli` and `object` dependencies.
* Added `rustc-dep-of-std` feature.
[#166](https://github.com/gimli-rs/addr2line/pull/166)
### Changed
* Improve performance by parsing function contents lazily.
[#178](https://github.com/gimli-rs/addr2line/pull/178)
* Don't skip `.debug_info` and `.debug_line` entries with a zero address.
[#182](https://github.com/gimli-rs/addr2line/pull/182)
--------------------------------------------------------------------------------
## 0.12.2 (2020/06/21)
### Fixed
* Avoid linear search for `DW_FORM_ref_addr`.
[#175](https://github.com/gimli-rs/addr2line/pull/175)
--------------------------------------------------------------------------------
## 0.12.1 (2020/05/19)
### Fixed
* Handle units with overlapping address ranges.
[#163](https://github.com/gimli-rs/addr2line/pull/163)
* Don't assert for functions with overlapping address ranges.
[#168](https://github.com/gimli-rs/addr2line/pull/168)
--------------------------------------------------------------------------------
## 0.12.0 (2020/05/12)
### Breaking changes
* Updated `gimli` and `object` dependencies.
* Added more optional features: `smallvec` and `fallible-iterator`.
[#160](https://github.com/gimli-rs/addr2line/pull/160)
### Added
* Added `Context::dwarf` and `Context::find_dwarf_unit`.
[#159](https://github.com/gimli-rs/addr2line/pull/159)
### Changed
* Removed `lazycell` dependency.
[#160](https://github.com/gimli-rs/addr2line/pull/160)
--------------------------------------------------------------------------------
## 0.11.0 (2020/01/11)
### Breaking changes
* Updated `gimli` and `object` dependencies.
* [#130](https://github.com/gimli-rs/addr2line/pull/130)
Changed `Location::file` from `Option<String>` to `Option<&str>`.
This required adding lifetime parameters to `Location` and other structs that
contain it.
* [#152](https://github.com/gimli-rs/addr2line/pull/152)
Changed `Location::line` and `Location::column` from `Option<u64>`to `Option<u32>`.
* [#156](https://github.com/gimli-rs/addr2line/pull/156)
Deleted `alloc` feature, and fixed `no-std` builds with stable rust.
Removed default `Reader` parameter for `Context`, and added `ObjectContext` instead.
### Added
* [#134](https://github.com/gimli-rs/addr2line/pull/134)
Added `Context::from_dwarf`.
### Changed
* [#133](https://github.com/gimli-rs/addr2line/pull/133)
Fixed handling of units that can't be parsed.
* [#155](https://github.com/gimli-rs/addr2line/pull/155)
Fixed `addr2line` output to match binutils.
* [#130](https://github.com/gimli-rs/addr2line/pull/130)
Improved `.debug_line` parsing performance.
* [#148](https://github.com/gimli-rs/addr2line/pull/148)
[#150](https://github.com/gimli-rs/addr2line/pull/150)
[#151](https://github.com/gimli-rs/addr2line/pull/151)
[#152](https://github.com/gimli-rs/addr2line/pull/152)
Improved `.debug_info` parsing performance.
* [#137](https://github.com/gimli-rs/addr2line/pull/137)
[#138](https://github.com/gimli-rs/addr2line/pull/138)
[#139](https://github.com/gimli-rs/addr2line/pull/139)
[#140](https://github.com/gimli-rs/addr2line/pull/140)
[#146](https://github.com/gimli-rs/addr2line/pull/146)
Improved benchmarks.
--------------------------------------------------------------------------------
## 0.10.0 (2019/07/07)
### Breaking changes
* [#127](https://github.com/gimli-rs/addr2line/pull/127)
Update `gimli`.
--------------------------------------------------------------------------------
## 0.9.0 (2019/05/02)
### Breaking changes
* [#121](https://github.com/gimli-rs/addr2line/pull/121)
Update `gimli`, `object`, and `fallible-iterator` dependencies.
### Added
* [#121](https://github.com/gimli-rs/addr2line/pull/121)
Reexport `gimli`, `object`, and `fallible-iterator`.
--------------------------------------------------------------------------------
## 0.8.0 (2019/02/06)
### Breaking changes
* [#107](https://github.com/gimli-rs/addr2line/pull/107)
Update `object` dependency to 0.11. This is part of the public API.
### Added
* [#101](https://github.com/gimli-rs/addr2line/pull/101)
Add `object` feature (enabled by default). Disable this feature to remove
the `object` dependency and `Context::new` API.
* [#102](https://github.com/gimli-rs/addr2line/pull/102)
Add `std` (enabled by default) and `alloc` features.
### Changed
* [#108](https://github.com/gimli-rs/addr2line/issues/108)
`demangle` no longer outputs the hash for rust symbols.
* [#109](https://github.com/gimli-rs/addr2line/issues/109)
Set default `R` for `Context<R>`.

704
vendor/addr2line/Cargo.lock generated vendored
View File

@@ -1,704 +0,0 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
[[package]]
name = "addr2line"
version = "0.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a76fd60b23679b7d19bd066031410fb7e458ccc5e958eb5c325888ce4baedc97"
dependencies = [
"gimli 0.27.2",
]
[[package]]
name = "addr2line"
version = "0.21.0"
dependencies = [
"backtrace",
"clap",
"compiler_builtins",
"cpp_demangle",
"fallible-iterator",
"findshlibs",
"gimli 0.28.0",
"libtest-mimic",
"memmap2",
"object 0.32.0",
"rustc-demangle",
"rustc-std-workspace-alloc",
"rustc-std-workspace-core",
"smallvec",
"typed-arena",
]
[[package]]
name = "adler"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"
[[package]]
name = "anstream"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ca84f3628370c59db74ee214b3263d58f9aadd9b4fe7e711fd87dc452b7f163"
dependencies = [
"anstyle",
"anstyle-parse",
"anstyle-query",
"anstyle-wincon",
"colorchoice",
"is-terminal",
"utf8parse",
]
[[package]]
name = "anstyle"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a30da5c5f2d5e72842e00bcb57657162cdabef0931f40e2deb9b4140440cecd"
[[package]]
name = "anstyle-parse"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "938874ff5980b03a87c5524b3ae5b59cf99b1d6bc836848df7bc5ada9643c333"
dependencies = [
"utf8parse",
]
[[package]]
name = "anstyle-query"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5ca11d4be1bab0c8bc8734a9aa7bf4ee8316d462a08c6ac5052f888fef5b494b"
dependencies = [
"windows-sys",
]
[[package]]
name = "anstyle-wincon"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c677ab05e09154296dd37acecd46420c17b9713e8366facafa8fc0885167cf4c"
dependencies = [
"anstyle",
"windows-sys",
]
[[package]]
name = "backtrace"
version = "0.3.67"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "233d376d6d185f2a3093e58f283f60f880315b6c60075b01f36b3b85154564ca"
dependencies = [
"addr2line 0.19.0",
"cc",
"cfg-if",
"libc",
"miniz_oxide",
"object 0.30.3",
"rustc-demangle",
]
[[package]]
name = "bitflags"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"
[[package]]
name = "bitflags"
version = "2.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4682ae6287fcf752ecaabbfcc7b6f9b72aa33933dc23a554d853aea8eea8635"
[[package]]
name = "byteorder"
version = "1.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610"
[[package]]
name = "cc"
version = "1.0.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "50d30906286121d95be3d479533b458f87493b30a4b5f79a607db8f5d11aa91f"
[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "clap"
version = "4.3.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c27cdf28c0f604ba3f512b0c9a409f8de8513e4816705deb0498b627e7c3a3fd"
dependencies = [
"clap_builder",
"clap_derive",
"once_cell",
]
[[package]]
name = "clap_builder"
version = "4.3.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08a9f1ab5e9f01a9b81f202e8562eb9a10de70abf9eaeac1be465c28b75aa4aa"
dependencies = [
"anstream",
"anstyle",
"clap_lex",
"strsim",
"terminal_size",
]
[[package]]
name = "clap_derive"
version = "4.3.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "54a9bb5758fc5dfe728d1019941681eccaf0cf8a4189b692a0ee2f2ecf90a050"
dependencies = [
"heck",
"proc-macro2",
"quote",
"syn 2.0.15",
]
[[package]]
name = "clap_lex"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2da6da31387c7e4ef160ffab6d5e7f00c42626fe39aea70a7b0f1773f7dd6c1b"
[[package]]
name = "colorchoice"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "acbf1af155f9b9ef647e42cdc158db4b64a1b61f743629225fde6f3e0be2a7c7"
[[package]]
name = "compiler_builtins"
version = "0.1.91"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "571298a3cce7e2afbd3d61abb91a18667d5ab25993ec577a88ee8ac45f00cc3a"
[[package]]
name = "cpp_demangle"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c76f98bdfc7f66172e6c7065f981ebb576ffc903fe4c0561d9f0c2509226dc6"
dependencies = [
"cfg-if",
]
[[package]]
name = "crc32fast"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b540bd8bc810d3885c6ea91e2018302f68baba2129ab3e88f32389ee9370880d"
dependencies = [
"cfg-if",
]
[[package]]
name = "errno"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6b30f669a7961ef1631673d2766cc92f52d64f7ef354d4fe0ddfd30ed52f0f4f"
dependencies = [
"errno-dragonfly",
"libc",
"windows-sys",
]
[[package]]
name = "errno-dragonfly"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa68f1b12764fab894d2755d2518754e71b4fd80ecfb822714a1206c2aab39bf"
dependencies = [
"cc",
"libc",
]
[[package]]
name = "fallible-iterator"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2acce4a10f12dc2fb14a218589d4f1f62ef011b2d0cc4b3cb1bba8e94da14649"
[[package]]
name = "findshlibs"
version = "0.10.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "40b9e59cd0f7e0806cca4be089683ecb6434e602038df21fe6bf6711b2f07f64"
dependencies = [
"cc",
"lazy_static",
"libc",
"winapi",
]
[[package]]
name = "flate2"
version = "1.0.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a8a2db397cb1c8772f31494cb8917e48cd1e64f0fa7efac59fbd741a0a8ce841"
dependencies = [
"crc32fast",
"miniz_oxide",
]
[[package]]
name = "gimli"
version = "0.27.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ad0a93d233ebf96623465aad4046a8d3aa4da22d4f4beba5388838c8a434bbb4"
[[package]]
name = "gimli"
version = "0.28.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6fb8d784f27acf97159b40fc4db5ecd8aa23b9ad5ef69cdd136d3bc80665f0c0"
dependencies = [
"compiler_builtins",
"fallible-iterator",
"rustc-std-workspace-alloc",
"rustc-std-workspace-core",
"stable_deref_trait",
]
[[package]]
name = "heck"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"
[[package]]
name = "hermit-abi"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ee512640fe35acbfb4bb779db6f0d80704c2cacfa2e39b601ef3e3f47d1ae4c7"
dependencies = [
"libc",
]
[[package]]
name = "hermit-abi"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "443144c8cdadd93ebf52ddb4056d257f5b52c04d3c804e657d19eb73fc33668b"
[[package]]
name = "io-lifetimes"
version = "1.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eae7b9aee968036d54dce06cebaefd919e4472e753296daccd6d344e3e2df0c2"
dependencies = [
"hermit-abi 0.3.2",
"libc",
"windows-sys",
]
[[package]]
name = "is-terminal"
version = "0.4.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cb0889898416213fab133e1d33a0e5858a48177452750691bde3666d0fdbaf8b"
dependencies = [
"hermit-abi 0.3.2",
"rustix 0.38.8",
"windows-sys",
]
[[package]]
name = "lazy_static"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "libc"
version = "0.2.147"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4668fb0ea861c1df094127ac5f1da3409a82116a4ba74fca2e58ef927159bb3"
[[package]]
name = "libtest-mimic"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d8de370f98a6cb8a4606618e53e802f93b094ddec0f96988eaec2c27e6e9ce7"
dependencies = [
"clap",
"termcolor",
"threadpool",
]
[[package]]
name = "linux-raw-sys"
version = "0.3.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ef53942eb7bf7ff43a617b3e2c1c4a5ecf5944a7c1bc12d7ee39bbb15e5c1519"
[[package]]
name = "linux-raw-sys"
version = "0.4.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57bcfdad1b858c2db7c38303a6d2ad4dfaf5eb53dfeb0910128b2c26d6158503"
[[package]]
name = "memchr"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2dffe52ecf27772e601905b7522cb4ef790d2cc203488bbd0e2fe85fcb74566d"
[[package]]
name = "memmap2"
version = "0.5.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83faa42c0a078c393f6b29d5db232d8be22776a891f8f56e5284faee4a20b327"
dependencies = [
"libc",
]
[[package]]
name = "miniz_oxide"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b275950c28b37e794e8c55d88aeb5e139d0ce23fdbbeda68f8d7174abdf9e8fa"
dependencies = [
"adler",
]
[[package]]
name = "num_cpus"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fac9e2da13b5eb447a6ce3d392f23a29d8694bff781bf03a16cd9ac8697593b"
dependencies = [
"hermit-abi 0.2.6",
"libc",
]
[[package]]
name = "object"
version = "0.30.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ea86265d3d3dcb6a27fc51bd29a4bf387fae9d2986b823079d4986af253eb439"
dependencies = [
"memchr",
]
[[package]]
name = "object"
version = "0.32.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77ac5bbd07aea88c60a577a1ce218075ffd59208b2d7ca97adf9bfc5aeb21ebe"
dependencies = [
"flate2",
"memchr",
"ruzstd",
]
[[package]]
name = "once_cell"
version = "1.17.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b7e5500299e16ebb147ae15a00a942af264cf3688f47923b8fc2cd5858f23ad3"
[[package]]
name = "proc-macro2"
version = "1.0.56"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b63bdb0cd06f1f4dedf69b254734f9b45af66e4a031e42a7480257d9898b435"
dependencies = [
"unicode-ident",
]
[[package]]
name = "quote"
version = "1.0.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4424af4bf778aae2051a77b60283332f386554255d722233d09fbfc7e30da2fc"
dependencies = [
"proc-macro2",
]
[[package]]
name = "rustc-demangle"
version = "0.1.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4a36c42d1873f9a77c53bde094f9664d9891bc604a45b4798fd2c389ed12e5b"
[[package]]
name = "rustc-std-workspace-alloc"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff66d57013a5686e1917ed6a025d54dd591fcda71a41fe07edf4d16726aefa86"
[[package]]
name = "rustc-std-workspace-core"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1956f5517128a2b6f23ab2dadf1a976f4f5b27962e7724c2bf3d45e539ec098c"
[[package]]
name = "rustix"
version = "0.37.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4d69718bf81c6127a49dc64e44a742e8bb9213c0ff8869a22c308f84c1d4ab06"
dependencies = [
"bitflags 1.3.2",
"errno",
"io-lifetimes",
"libc",
"linux-raw-sys 0.3.8",
"windows-sys",
]
[[package]]
name = "rustix"
version = "0.38.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "19ed4fa021d81c8392ce04db050a3da9a60299050b7ae1cf482d862b54a7218f"
dependencies = [
"bitflags 2.4.0",
"errno",
"libc",
"linux-raw-sys 0.4.5",
"windows-sys",
]
[[package]]
name = "ruzstd"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3ffab8f9715a0d455df4bbb9d21e91135aab3cd3ca187af0cd0c3c3f868fdc"
dependencies = [
"byteorder",
"thiserror-core",
"twox-hash",
]
[[package]]
name = "smallvec"
version = "1.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a507befe795404456341dfab10cef66ead4c041f62b8b11bbb92bffe5d0953e0"
[[package]]
name = "stable_deref_trait"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a8f112729512f8e442d81f95a8a7ddf2b7c6b8a1a6f509a95864142b30cab2d3"
[[package]]
name = "static_assertions"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
[[package]]
name = "strsim"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73473c0e59e6d5812c5dfe2a064a6444949f089e20eec9a2e5506596494e4623"
[[package]]
name = "syn"
version = "1.0.109"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b64191b275b66ffe2469e8af2c1cfe3bafa67b529ead792a6d0160888b4237"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "syn"
version = "2.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a34fcf3e8b60f57e6a14301a2e916d323af98b0ea63c599441eec8558660c822"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "termcolor"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be55cf8942feac5c765c2c993422806843c9a9a45d4d5c407ad6dd2ea95eb9b6"
dependencies = [
"winapi-util",
]
[[package]]
name = "terminal_size"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e6bf6f19e9f8ed8d4048dc22981458ebcf406d67e94cd422e5ecd73d63b3237"
dependencies = [
"rustix 0.37.23",
"windows-sys",
]
[[package]]
name = "thiserror-core"
version = "1.0.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0d97345f6437bb2004cd58819d8a9ef8e36cdd7661c2abc4bbde0a7c40d9f497"
dependencies = [
"thiserror-core-impl",
]
[[package]]
name = "thiserror-core-impl"
version = "1.0.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "10ac1c5050e43014d16b2f94d0d2ce79e65ffdd8b38d8048f9c8f6a8a6da62ac"
dependencies = [
"proc-macro2",
"quote",
"syn 1.0.109",
]
[[package]]
name = "threadpool"
version = "1.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d050e60b33d41c19108b32cea32164033a9013fe3b46cbd4457559bfbf77afaa"
dependencies = [
"num_cpus",
]
[[package]]
name = "twox-hash"
version = "1.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "97fee6b57c6a41524a810daee9286c02d7752c4253064d0b05472833a438f675"
dependencies = [
"cfg-if",
"static_assertions",
]
[[package]]
name = "typed-arena"
version = "2.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6af6ae20167a9ece4bcb41af5b80f8a1f1df981f6391189ce00fd257af04126a"
[[package]]
name = "unicode-ident"
version = "1.0.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5464a87b239f13a63a501f2701565754bae92d243d4bb7eb12f6d57d2269bf4"
[[package]]
name = "utf8parse"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "711b9620af191e0cdc7468a8d14e709c3dcdb115b36f838e601583af800a370a"
[[package]]
name = "winapi"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
dependencies = [
"winapi-i686-pc-windows-gnu",
"winapi-x86_64-pc-windows-gnu",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
[[package]]
name = "winapi-util"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178"
dependencies = [
"winapi",
]
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows-sys"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "677d2418bec65e3338edb076e806bc1ec15693c5d0104683f2efe857f61056a9"
dependencies = [
"windows-targets",
]
[[package]]
name = "windows-targets"
version = "0.48.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05d4b17490f70499f20b9e791dcf6a299785ce8af4d709018206dc5b4953e95f"
dependencies = [
"windows_aarch64_gnullvm",
"windows_aarch64_msvc",
"windows_i686_gnu",
"windows_i686_msvc",
"windows_x86_64_gnu",
"windows_x86_64_gnullvm",
"windows_x86_64_msvc",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91ae572e1b79dba883e0d315474df7305d12f569b400fcf90581b06062f7e1bc"
[[package]]
name = "windows_aarch64_msvc"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b2ef27e0d7bdfcfc7b868b317c1d32c641a6fe4629c171b8928c7b08d98d7cf3"
[[package]]
name = "windows_i686_gnu"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "622a1962a7db830d6fd0a69683c80a18fda201879f0f447f065a3b7467daa241"
[[package]]
name = "windows_i686_msvc"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4542c6e364ce21bf45d69fdd2a8e455fa38d316158cfd43b3ac1c5b1b19f8e00"
[[package]]
name = "windows_x86_64_gnu"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ca2b8a661f7628cbd23440e50b05d705db3686f894fc9580820623656af974b1"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7896dbc1f41e08872e9d5e8f8baa8fdd2677f29468c4e156210174edc7f7b953"
[[package]]
name = "windows_x86_64_msvc"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a515f5799fe4961cb532f983ce2b23082366b898e52ffbce459c86f67c8378a"

View File

@@ -1,147 +0,0 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2018"
rust-version = "1.65"
name = "addr2line"
version = "0.21.0"
exclude = [
"/benches/*",
"/fixtures/*",
".github",
]
description = "A cross-platform symbolication library written in Rust, using `gimli`"
documentation = "https://docs.rs/addr2line"
readme = "./README.md"
keywords = [
"DWARF",
"debug",
"elf",
"symbolicate",
"atos",
]
categories = ["development-tools::debugging"]
license = "Apache-2.0 OR MIT"
repository = "https://github.com/gimli-rs/addr2line"
[profile.bench]
codegen-units = 1
debug = true
[profile.release]
debug = true
[[example]]
name = "addr2line"
required-features = ["default"]
[[test]]
name = "output_equivalence"
harness = false
required-features = ["default"]
[[test]]
name = "correctness"
required-features = ["default"]
[[test]]
name = "parse"
required-features = ["std-object"]
[dependencies.alloc]
version = "1.0.0"
optional = true
package = "rustc-std-workspace-alloc"
[dependencies.compiler_builtins]
version = "0.1.2"
optional = true
[dependencies.core]
version = "1.0.0"
optional = true
package = "rustc-std-workspace-core"
[dependencies.cpp_demangle]
version = "0.4"
features = ["alloc"]
optional = true
default-features = false
[dependencies.fallible-iterator]
version = "0.3.0"
optional = true
default-features = false
[dependencies.gimli]
version = "0.28.0"
features = ["read"]
default-features = false
[dependencies.memmap2]
version = "0.5.5"
optional = true
[dependencies.object]
version = "0.32.0"
features = ["read"]
optional = true
default-features = false
[dependencies.rustc-demangle]
version = "0.1"
optional = true
[dependencies.smallvec]
version = "1"
optional = true
default-features = false
[dev-dependencies.backtrace]
version = "0.3.13"
[dev-dependencies.clap]
version = "4.3.21"
features = ["wrap_help"]
[dev-dependencies.findshlibs]
version = "0.10"
[dev-dependencies.libtest-mimic]
version = "0.6.1"
[dev-dependencies.typed-arena]
version = "2"
[features]
default = [
"rustc-demangle",
"cpp_demangle",
"std-object",
"fallible-iterator",
"smallvec",
"memmap2",
]
rustc-dep-of-std = [
"core",
"alloc",
"compiler_builtins",
"gimli/rustc-dep-of-std",
]
std = ["gimli/std"]
std-object = [
"std",
"object",
"object/std",
"object/compression",
"gimli/endian-reader",
]


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,25 +0,0 @@
Copyright (c) 2016-2018 The gimli Developers
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@@ -1,48 +0,0 @@
# addr2line
[![](https://img.shields.io/crates/v/addr2line.svg)](https://crates.io/crates/addr2line)
[![](https://img.shields.io/docsrs/addr2line.svg)](https://docs.rs/addr2line)
[![Coverage Status](https://coveralls.io/repos/github/gimli-rs/addr2line/badge.svg?branch=master)](https://coveralls.io/github/gimli-rs/addr2line?branch=master)
A cross-platform library for retrieving per-address debug information
from files with DWARF debug information.
`addr2line` uses [`gimli`](https://github.com/gimli-rs/gimli) to parse
the debug information, and exposes an interface for finding
the source file, line number, and wrapping function for instruction
addresses within the target program. These lookups can either be
performed programmatically through `Context::find_location` and
`Context::find_frames`, or via the included example binary,
`addr2line` (named and modelled after the equivalent utility from
[GNU binutils](https://sourceware.org/binutils/docs/binutils/addr2line.html)).
# Quickstart
- Add the [`addr2line` crate](https://crates.io/crates/addr2line) to your `Cargo.toml`
- Load the file and parse it with [`addr2line::object::read::File::parse`](https://docs.rs/object/*/object/read/struct.File.html#method.parse)
- Pass the parsed file to [`addr2line::Context::new` ](https://docs.rs/addr2line/*/addr2line/struct.Context.html#method.new)
- Use [`addr2line::Context::find_location`](https://docs.rs/addr2line/*/addr2line/struct.Context.html#method.find_location)
or [`addr2line::Context::find_frames`](https://docs.rs/addr2line/*/addr2line/struct.Context.html#method.find_frames)
to look up debug information for an address (a minimal sketch of these steps follows)
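
The following is a minimal sketch of those quickstart steps, not part of the upstream documentation. It assumes addr2line's default features (which re-export `object` and enable `gimli`'s `std` support), a direct `memmap2` dependency, and placeholder binary path and address values.

```rust
use std::fs::File;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder inputs; substitute a real executable and address.
    let file = File::open("/path/to/binary")?;
    let map = unsafe { memmap2::Mmap::map(&file)? };

    // Parse the object file and build a lookup context.
    let object = addr2line::object::File::parse(&*map)?;
    let ctx = addr2line::Context::new(&object)?;

    // Resolve a single address to a file:line location, if known.
    if let Some(loc) = ctx.find_location(0x1000)? {
        println!("{}:{}", loc.file.unwrap_or("??"), loc.line.unwrap_or(0));
    }
    Ok(())
}
```

For inlined call chains, `Context::find_frames` from the last step is the entry point instead of `find_location`.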
# Performance
`addr2line` optimizes for speed over memory by caching parsed information.
The DWARF information is parsed lazily where possible.
The library aims to perform similarly to equivalent existing tools such
as `addr2line` from binutils, `eu-addr2line` from elfutils, and
`llvm-symbolizer` from the LLVM project; benchmarking done in the past
indicated comparable performance.
## License
Licensed under either of
* Apache License, Version 2.0 ([`LICENSE-APACHE`](./LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([`LICENSE-MIT`](./LICENSE-MIT) or https://opensource.org/licenses/MIT)
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.


@@ -1,23 +0,0 @@
v <- read.table(file("stdin"))
t <- data.frame(prog=v[,1], funcs=(v[,2]=="func"), time=v[,3], mem=v[,4], stringsAsFactors=FALSE)
t$prog <- as.character(t$prog)
t$prog[t$prog == "master"] <- "gimli-rs/addr2line"
t$funcs[t$funcs == TRUE] <- "With functions"
t$funcs[t$funcs == FALSE] <- "File/line only"
t$mem = t$mem / 1024.0
library(ggplot2)
p <- ggplot(data=t, aes(x=prog, y=time, fill=prog))
p <- p + geom_bar(stat = "identity")
p <- p + facet_wrap(~ funcs)
p <- p + theme(axis.title.x=element_blank(), axis.text.x=element_blank(), axis.ticks.x=element_blank())
p <- p + ylab("time (s)") + ggtitle("addr2line runtime")
ggsave('time.png',plot=p,width=10,height=6)
p <- ggplot(data=t, aes(x=prog, y=mem, fill=prog))
p <- p + geom_bar(stat = "identity")
p <- p + facet_wrap(~ funcs)
p <- p + theme(axis.title.x=element_blank(), axis.text.x=element_blank(), axis.ticks.x=element_blank())
p <- p + ylab("memory (kB)") + ggtitle("addr2line memory usage")
ggsave('memory.png',plot=p,width=10,height=6)


@@ -1,112 +0,0 @@
#!/bin/bash
if [[ $# -le 1 ]]; then
echo "Usage: $0 <executable> [<addresses>] REFS..."
exit 1
fi
target="$1"
shift
addresses=""
if [[ -e "$1" ]]; then
addresses="$1"
shift
fi
# path to "us"
# readlink -f, but more portable:
dirname=$(perl -e 'use Cwd "abs_path";print abs_path(shift)' "$(dirname "$0")")
# https://stackoverflow.com/a/2358432/472927
{
# compile all refs
pushd "$dirname" > /dev/null
# if the user has some local changes, preserve them
nstashed=$(git stash list | wc -l)
echo "==> Stashing any local modifications"
git stash --keep-index > /dev/null
popstash() {
# https://stackoverflow.com/q/24520791/472927
if [[ "$(git stash list | wc -l)" -ne "$nstashed" ]]; then
echo "==> Restoring stashed state"
git stash pop > /dev/null
fi
}
# if the user has added stuff to the index, abort
if ! git diff-index --quiet HEAD --; then
echo "Refusing to overwrite outstanding git changes"
popstash
exit 2
fi
current=$(git symbolic-ref --short HEAD)
for ref in "$@"; do
echo "==> Compiling $ref"
git checkout -q "$ref"
commit=$(git rev-parse HEAD)
fn="target/release/addr2line-$commit"
if [[ ! -e "$fn" ]]; then
cargo build --release --example addr2line
cp target/release/examples/addr2line "$fn"
fi
if [[ "$ref" != "$commit" ]]; then
ln -sfn "addr2line-$commit" target/release/addr2line-"$ref"
fi
done
git checkout -q "$current"
popstash
popd > /dev/null
# get us some addresses to look up
if [[ -z "$addresses" ]]; then
echo "==> Looking for benchmarking addresses (this may take a while)"
addresses=$(mktemp tmp.XXXXXXXXXX)
objdump -C -x --disassemble -l "$target" \
| grep -P '0[048]:' \
| awk '{print $1}' \
| sed 's/:$//' \
> "$addresses"
echo " -> Addresses stored in $addresses; you should re-use it next time"
fi
run() {
func="$1"
name="$2"
cmd="$3"
args="$4"
printf "%s\t%s\t" "$name" "$func"
if [[ "$cmd" =~ llvm-symbolizer ]]; then
/usr/bin/time -f '%e\t%M' "$cmd" $args -obj="$target" < "$addresses" 2>&1 >/dev/null
else
/usr/bin/time -f '%e\t%M' "$cmd" $args -e "$target" < "$addresses" 2>&1 >/dev/null
fi
}
# run without functions
log1=$(mktemp tmp.XXXXXXXXXX)
echo "==> Benchmarking"
run nofunc binutils addr2line >> "$log1"
#run nofunc elfutils eu-addr2line >> "$log1"
run nofunc llvm-sym llvm-symbolizer -functions=none >> "$log1"
for ref in "$@"; do
run nofunc "$ref" "$dirname/target/release/addr2line-$ref" >> "$log1"
done
cat "$log1" | column -t
# run with functions
log2=$(mktemp tmp.XXXXXXXXXX)
echo "==> Benchmarking with -f"
run func binutils addr2line "-f -i" >> "$log2"
#run func elfutils eu-addr2line "-f -i" >> "$log2"
run func llvm-sym llvm-symbolizer "-functions=linkage -demangle=0" >> "$log2"
for ref in "$@"; do
run func "$ref" "$dirname/target/release/addr2line-$ref" "-f -i" >> "$log2"
done
cat "$log2" | column -t
cat "$log2" >> "$log1"; rm "$log2"
echo "==> Plotting"
Rscript --no-readline --no-restore --no-save "$dirname/bench.plot.r" < "$log1"
echo "==> Cleaning up"
rm "$log1"
exit 0
}


@@ -1,5 +0,0 @@
#!/bin/sh
# Run tarpaulin and pycobertura to generate coverage.html.
cargo tarpaulin --skip-clean --out Xml
pycobertura show --format html --output coverage.html cobertura.xml


@@ -1,317 +0,0 @@
use std::borrow::Cow;
use std::fs::File;
use std::io::{BufRead, Lines, StdinLock, Write};
use std::path::{Path, PathBuf};
use clap::{Arg, ArgAction, Command};
use fallible_iterator::FallibleIterator;
use object::{Object, ObjectSection, SymbolMap, SymbolMapName};
use typed_arena::Arena;
use addr2line::{Context, Location};
fn parse_uint_from_hex_string(string: &str) -> Option<u64> {
if string.len() > 2 && string.starts_with("0x") {
u64::from_str_radix(&string[2..], 16).ok()
} else {
u64::from_str_radix(string, 16).ok()
}
}
enum Addrs<'a> {
Args(clap::parser::ValuesRef<'a, String>),
Stdin(Lines<StdinLock<'a>>),
}
impl<'a> Iterator for Addrs<'a> {
type Item = Option<u64>;
fn next(&mut self) -> Option<Option<u64>> {
let text = match *self {
Addrs::Args(ref mut vals) => vals.next().map(Cow::from),
Addrs::Stdin(ref mut lines) => lines.next().map(Result::unwrap).map(Cow::from),
};
text.as_ref()
.map(Cow::as_ref)
.map(parse_uint_from_hex_string)
}
}
fn print_loc(loc: Option<&Location<'_>>, basenames: bool, llvm: bool) {
if let Some(loc) = loc {
if let Some(ref file) = loc.file.as_ref() {
let path = if basenames {
Path::new(Path::new(file).file_name().unwrap())
} else {
Path::new(file)
};
print!("{}:", path.display());
} else {
print!("??:");
}
if llvm {
print!("{}:{}", loc.line.unwrap_or(0), loc.column.unwrap_or(0));
} else if let Some(line) = loc.line {
print!("{}", line);
} else {
print!("?");
}
println!();
} else if llvm {
println!("??:0:0");
} else {
println!("??:0");
}
}
fn print_function(name: Option<&str>, language: Option<gimli::DwLang>, demangle: bool) {
if let Some(name) = name {
if demangle {
print!("{}", addr2line::demangle_auto(Cow::from(name), language));
} else {
print!("{}", name);
}
} else {
print!("??");
}
}
fn load_file_section<'input, 'arena, Endian: gimli::Endianity>(
id: gimli::SectionId,
file: &object::File<'input>,
endian: Endian,
arena_data: &'arena Arena<Cow<'input, [u8]>>,
) -> Result<gimli::EndianSlice<'arena, Endian>, ()> {
// TODO: Unify with dwarfdump.rs in gimli.
let name = id.name();
match file.section_by_name(name) {
Some(section) => match section.uncompressed_data().unwrap() {
Cow::Borrowed(b) => Ok(gimli::EndianSlice::new(b, endian)),
Cow::Owned(b) => Ok(gimli::EndianSlice::new(arena_data.alloc(b.into()), endian)),
},
None => Ok(gimli::EndianSlice::new(&[][..], endian)),
}
}
fn find_name_from_symbols<'a>(
symbols: &'a SymbolMap<SymbolMapName<'_>>,
probe: u64,
) -> Option<&'a str> {
symbols.get(probe).map(|x| x.name())
}
struct Options<'a> {
do_functions: bool,
do_inlines: bool,
pretty: bool,
print_addrs: bool,
basenames: bool,
demangle: bool,
llvm: bool,
exe: &'a PathBuf,
sup: Option<&'a PathBuf>,
}
fn main() {
let matches = Command::new("addr2line")
.version(env!("CARGO_PKG_VERSION"))
.about("A fast addr2line Rust port")
.max_term_width(100)
.args(&[
Arg::new("exe")
.short('e')
.long("exe")
.value_name("filename")
.value_parser(clap::value_parser!(PathBuf))
.help(
"Specify the name of the executable for which addresses should be translated.",
)
.required(true),
Arg::new("sup")
.long("sup")
.value_name("filename")
.value_parser(clap::value_parser!(PathBuf))
.help("Path to supplementary object file."),
Arg::new("functions")
.short('f')
.long("functions")
.action(ArgAction::SetTrue)
.help("Display function names as well as file and line number information."),
Arg::new("pretty").short('p').long("pretty-print")
.action(ArgAction::SetTrue)
.help(
"Make the output more human friendly: each location are printed on one line.",
),
Arg::new("inlines").short('i').long("inlines")
.action(ArgAction::SetTrue)
.help(
"If the address belongs to a function that was inlined, the source information for \
all enclosing scopes back to the first non-inlined function will also be printed.",
),
Arg::new("addresses").short('a').long("addresses")
.action(ArgAction::SetTrue)
.help(
"Display the address before the function name, file and line number information.",
),
Arg::new("basenames")
.short('s')
.long("basenames")
.action(ArgAction::SetTrue)
.help("Display only the base of each file name."),
Arg::new("demangle").short('C').long("demangle")
.action(ArgAction::SetTrue)
.help(
"Demangle function names. \
Specifying a specific demangling style (like GNU addr2line) is not supported. \
(TODO)"
),
Arg::new("llvm")
.long("llvm")
.action(ArgAction::SetTrue)
.help("Display output in the same format as llvm-symbolizer."),
Arg::new("addrs")
.action(ArgAction::Append)
.help("Addresses to use instead of reading from stdin."),
])
.get_matches();
let arena_data = Arena::new();
let opts = Options {
do_functions: matches.get_flag("functions"),
do_inlines: matches.get_flag("inlines"),
pretty: matches.get_flag("pretty"),
print_addrs: matches.get_flag("addresses"),
basenames: matches.get_flag("basenames"),
demangle: matches.get_flag("demangle"),
llvm: matches.get_flag("llvm"),
exe: matches.get_one::<PathBuf>("exe").unwrap(),
sup: matches.get_one::<PathBuf>("sup"),
};
let file = File::open(opts.exe).unwrap();
let map = unsafe { memmap2::Mmap::map(&file).unwrap() };
let object = &object::File::parse(&*map).unwrap();
let endian = if object.is_little_endian() {
gimli::RunTimeEndian::Little
} else {
gimli::RunTimeEndian::Big
};
let mut load_section = |id: gimli::SectionId| -> Result<_, _> {
load_file_section(id, object, endian, &arena_data)
};
let sup_map;
let sup_object = if let Some(sup_path) = opts.sup {
let sup_file = File::open(sup_path).unwrap();
sup_map = unsafe { memmap2::Mmap::map(&sup_file).unwrap() };
Some(object::File::parse(&*sup_map).unwrap())
} else {
None
};
let symbols = object.symbol_map();
let mut dwarf = gimli::Dwarf::load(&mut load_section).unwrap();
if let Some(ref sup_object) = sup_object {
let mut load_sup_section = |id: gimli::SectionId| -> Result<_, _> {
load_file_section(id, sup_object, endian, &arena_data)
};
dwarf.load_sup(&mut load_sup_section).unwrap();
}
let mut split_dwarf_loader = addr2line::builtin_split_dwarf_loader::SplitDwarfLoader::new(
|data, endian| {
gimli::EndianSlice::new(arena_data.alloc(Cow::Owned(data.into_owned())), endian)
},
Some(opts.exe.clone()),
);
let ctx = Context::from_dwarf(dwarf).unwrap();
let stdin = std::io::stdin();
let addrs = matches
.get_many::<String>("addrs")
.map(Addrs::Args)
.unwrap_or_else(|| Addrs::Stdin(stdin.lock().lines()));
for probe in addrs {
if opts.print_addrs {
let addr = probe.unwrap_or(0);
if opts.llvm {
print!("0x{:x}", addr);
} else {
print!("0x{:016x}", addr);
}
if opts.pretty {
print!(": ");
} else {
println!();
}
}
if opts.do_functions || opts.do_inlines {
let mut printed_anything = false;
if let Some(probe) = probe {
let frames = ctx.find_frames(probe);
let frames = split_dwarf_loader.run(frames).unwrap();
let mut frames = frames.enumerate();
while let Some((i, frame)) = frames.next().unwrap() {
if opts.pretty && i != 0 {
print!(" (inlined by) ");
}
if opts.do_functions {
if let Some(func) = frame.function {
print_function(
func.raw_name().ok().as_ref().map(AsRef::as_ref),
func.language,
opts.demangle,
);
} else {
let name = find_name_from_symbols(&symbols, probe);
print_function(name, None, opts.demangle);
}
if opts.pretty {
print!(" at ");
} else {
println!();
}
}
print_loc(frame.location.as_ref(), opts.basenames, opts.llvm);
printed_anything = true;
if !opts.do_inlines {
break;
}
}
}
if !printed_anything {
if opts.do_functions {
let name = probe.and_then(|probe| find_name_from_symbols(&symbols, probe));
print_function(name, None, opts.demangle);
if opts.pretty {
print!(" at ");
} else {
println!();
}
}
print_loc(None, opts.basenames, opts.llvm);
}
} else {
let loc = probe.and_then(|probe| ctx.find_location(probe).unwrap());
print_loc(loc.as_ref(), opts.basenames, opts.llvm);
}
if opts.llvm {
println!();
}
std::io::stdout().flush().unwrap();
}
}


@@ -1 +0,0 @@


@@ -1,164 +0,0 @@
use alloc::borrow::Cow;
use alloc::sync::Arc;
use std::fs::File;
use std::path::PathBuf;
use object::Object;
use crate::{LookupContinuation, LookupResult};
#[cfg(unix)]
fn convert_path<R: gimli::Reader<Endian = gimli::RunTimeEndian>>(
r: &R,
) -> Result<PathBuf, gimli::Error> {
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;
let bytes = r.to_slice()?;
let s = OsStr::from_bytes(&bytes);
Ok(PathBuf::from(s))
}
#[cfg(not(unix))]
fn convert_path<R: gimli::Reader<Endian = gimli::RunTimeEndian>>(
r: &R,
) -> Result<PathBuf, gimli::Error> {
let bytes = r.to_slice()?;
let s = std::str::from_utf8(&bytes).map_err(|_| gimli::Error::BadUtf8)?;
Ok(PathBuf::from(s))
}
fn load_section<'data: 'file, 'file, O, R, F>(
id: gimli::SectionId,
file: &'file O,
endian: R::Endian,
loader: &mut F,
) -> Result<R, gimli::Error>
where
O: object::Object<'data, 'file>,
R: gimli::Reader<Endian = gimli::RunTimeEndian>,
F: FnMut(Cow<'data, [u8]>, R::Endian) -> R,
{
use object::ObjectSection;
let data = id
.dwo_name()
.and_then(|dwo_name| {
file.section_by_name(dwo_name)
.and_then(|section| section.uncompressed_data().ok())
})
.unwrap_or(Cow::Borrowed(&[]));
Ok(loader(data, endian))
}
/// A simple builtin split DWARF loader.
pub struct SplitDwarfLoader<R, F>
where
R: gimli::Reader<Endian = gimli::RunTimeEndian>,
F: FnMut(Cow<'_, [u8]>, R::Endian) -> R,
{
loader: F,
dwarf_package: Option<gimli::DwarfPackage<R>>,
}
impl<R, F> SplitDwarfLoader<R, F>
where
R: gimli::Reader<Endian = gimli::RunTimeEndian>,
F: FnMut(Cow<'_, [u8]>, R::Endian) -> R,
{
fn load_dwarf_package(loader: &mut F, path: Option<PathBuf>) -> Option<gimli::DwarfPackage<R>> {
let mut path = path.map(Ok).unwrap_or_else(std::env::current_exe).ok()?;
let dwp_extension = path
.extension()
.map(|previous_extension| {
let mut previous_extension = previous_extension.to_os_string();
previous_extension.push(".dwp");
previous_extension
})
.unwrap_or_else(|| "dwp".into());
path.set_extension(dwp_extension);
let file = File::open(&path).ok()?;
let map = unsafe { memmap2::Mmap::map(&file).ok()? };
let dwp = object::File::parse(&*map).ok()?;
let endian = if dwp.is_little_endian() {
gimli::RunTimeEndian::Little
} else {
gimli::RunTimeEndian::Big
};
let empty = loader(Cow::Borrowed(&[]), endian);
gimli::DwarfPackage::load(
|section_id| load_section(section_id, &dwp, endian, loader),
empty,
)
.ok()
}
/// Create a new split DWARF loader.
pub fn new(mut loader: F, path: Option<PathBuf>) -> SplitDwarfLoader<R, F> {
let dwarf_package = SplitDwarfLoader::load_dwarf_package(&mut loader, path);
SplitDwarfLoader {
loader,
dwarf_package,
}
}
/// Run the provided `LookupResult` to completion, loading any necessary
/// split DWARF along the way.
pub fn run<L>(&mut self, mut l: LookupResult<L>) -> L::Output
where
L: LookupContinuation<Buf = R>,
{
loop {
let (load, continuation) = match l {
LookupResult::Output(output) => break output,
LookupResult::Load { load, continuation } => (load, continuation),
};
let mut r: Option<Arc<gimli::Dwarf<_>>> = None;
if let Some(dwp) = self.dwarf_package.as_ref() {
if let Ok(Some(cu)) = dwp.find_cu(load.dwo_id, &load.parent) {
r = Some(Arc::new(cu));
}
}
if r.is_none() {
let mut path = PathBuf::new();
if let Some(p) = load.comp_dir.as_ref() {
if let Ok(p) = convert_path(p) {
path.push(p);
}
}
if let Some(p) = load.path.as_ref() {
if let Ok(p) = convert_path(p) {
path.push(p);
}
}
if let Ok(file) = File::open(&path) {
if let Ok(map) = unsafe { memmap2::Mmap::map(&file) } {
if let Ok(file) = object::File::parse(&*map) {
let endian = if file.is_little_endian() {
gimli::RunTimeEndian::Little
} else {
gimli::RunTimeEndian::Big
};
r = gimli::Dwarf::load(|id| {
load_section(id, &file, endian, &mut self.loader)
})
.ok()
.map(|mut dwo_dwarf| {
dwo_dwarf.make_dwo(&load.parent);
Arc::new(dwo_dwarf)
});
}
}
}
}
l = continuation.resume(r);
}
}
}


@@ -1,555 +0,0 @@
use alloc::boxed::Box;
use alloc::vec::Vec;
use core::cmp::Ordering;
use core::iter;
use crate::lazy::LazyCell;
use crate::maybe_small;
use crate::{Context, DebugFile, Error, RangeAttributes};
pub(crate) struct Functions<R: gimli::Reader> {
/// List of all `DW_TAG_subprogram` details in the unit.
pub(crate) functions: Box<
[(
gimli::UnitOffset<R::Offset>,
LazyCell<Result<Function<R>, Error>>,
)],
>,
/// List of `DW_TAG_subprogram` address ranges in the unit.
pub(crate) addresses: Box<[FunctionAddress]>,
}
/// A single address range for a function.
///
/// It is possible for a function to have multiple address ranges; this
/// is handled by having multiple `FunctionAddress` entries with the same
/// `function` field.
pub(crate) struct FunctionAddress {
range: gimli::Range,
/// An index into `Functions::functions`.
pub(crate) function: usize,
}
pub(crate) struct Function<R: gimli::Reader> {
pub(crate) dw_die_offset: gimli::UnitOffset<R::Offset>,
pub(crate) name: Option<R>,
/// List of all `DW_TAG_inlined_subroutine` details in this function.
inlined_functions: Box<[InlinedFunction<R>]>,
/// List of `DW_TAG_inlined_subroutine` address ranges in this function.
inlined_addresses: Box<[InlinedFunctionAddress]>,
}
pub(crate) struct InlinedFunctionAddress {
range: gimli::Range,
call_depth: usize,
/// An index into `Function::inlined_functions`.
function: usize,
}
pub(crate) struct InlinedFunction<R: gimli::Reader> {
pub(crate) dw_die_offset: gimli::UnitOffset<R::Offset>,
pub(crate) name: Option<R>,
pub(crate) call_file: Option<u64>,
pub(crate) call_line: u32,
pub(crate) call_column: u32,
}
impl<R: gimli::Reader> Functions<R> {
pub(crate) fn parse(
unit: &gimli::Unit<R>,
sections: &gimli::Dwarf<R>,
) -> Result<Functions<R>, Error> {
let mut functions = Vec::new();
let mut addresses = Vec::new();
let mut entries = unit.entries_raw(None)?;
while !entries.is_empty() {
let dw_die_offset = entries.next_offset();
if let Some(abbrev) = entries.read_abbreviation()? {
if abbrev.tag() == gimli::DW_TAG_subprogram {
let mut ranges = RangeAttributes::default();
for spec in abbrev.attributes() {
match entries.read_attribute(*spec) {
Ok(ref attr) => {
match attr.name() {
gimli::DW_AT_low_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => {
ranges.low_pc = Some(val)
}
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.low_pc = Some(sections.address(unit, index)?);
}
_ => {}
},
gimli::DW_AT_high_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => {
ranges.high_pc = Some(val)
}
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.high_pc = Some(sections.address(unit, index)?);
}
gimli::AttributeValue::Udata(val) => {
ranges.size = Some(val)
}
_ => {}
},
gimli::DW_AT_ranges => {
ranges.ranges_offset =
sections.attr_ranges_offset(unit, attr.value())?;
}
_ => {}
};
}
Err(e) => return Err(e),
}
}
let function_index = functions.len();
if ranges.for_each_range(sections, unit, |range| {
addresses.push(FunctionAddress {
range,
function: function_index,
});
})? {
functions.push((dw_die_offset, LazyCell::new()));
}
} else {
entries.skip_attributes(abbrev.attributes())?;
}
}
}
// The binary search requires the addresses to be sorted.
//
// It also requires them to be non-overlapping. In practice, overlapping
// function ranges are unlikely, so we don't try to handle that yet.
//
// It's possible for multiple functions to have the same address range if the
// compiler can detect and remove functions with identical code. In that case
// we'll nondeterministically return one of them.
addresses.sort_by_key(|x| x.range.begin);
Ok(Functions {
functions: functions.into_boxed_slice(),
addresses: addresses.into_boxed_slice(),
})
}
pub(crate) fn find_address(&self, probe: u64) -> Option<usize> {
self.addresses
.binary_search_by(|address| {
if probe < address.range.begin {
Ordering::Greater
} else if probe >= address.range.end {
Ordering::Less
} else {
Ordering::Equal
}
})
.ok()
}
pub(crate) fn parse_inlined_functions(
&self,
file: DebugFile,
unit: &gimli::Unit<R>,
ctx: &Context<R>,
sections: &gimli::Dwarf<R>,
) -> Result<(), Error> {
for function in &*self.functions {
function
.1
.borrow_with(|| Function::parse(function.0, file, unit, ctx, sections))
.as_ref()
.map_err(Error::clone)?;
}
Ok(())
}
}
impl<R: gimli::Reader> Function<R> {
pub(crate) fn parse(
dw_die_offset: gimli::UnitOffset<R::Offset>,
file: DebugFile,
unit: &gimli::Unit<R>,
ctx: &Context<R>,
sections: &gimli::Dwarf<R>,
) -> Result<Self, Error> {
let mut entries = unit.entries_raw(Some(dw_die_offset))?;
let depth = entries.next_depth();
let abbrev = entries.read_abbreviation()?.unwrap();
debug_assert_eq!(abbrev.tag(), gimli::DW_TAG_subprogram);
let mut name = None;
for spec in abbrev.attributes() {
match entries.read_attribute(*spec) {
Ok(ref attr) => {
match attr.name() {
gimli::DW_AT_linkage_name | gimli::DW_AT_MIPS_linkage_name => {
if let Ok(val) = sections.attr_string(unit, attr.value()) {
name = Some(val);
}
}
gimli::DW_AT_name => {
if name.is_none() {
name = sections.attr_string(unit, attr.value()).ok();
}
}
gimli::DW_AT_abstract_origin | gimli::DW_AT_specification => {
if name.is_none() {
name = name_attr(attr.value(), file, unit, ctx, sections, 16)?;
}
}
_ => {}
};
}
Err(e) => return Err(e),
}
}
let mut inlined_functions = Vec::new();
let mut inlined_addresses = Vec::new();
Function::parse_children(
&mut entries,
depth,
file,
unit,
ctx,
sections,
&mut inlined_functions,
&mut inlined_addresses,
0,
)?;
// Sort ranges in "breadth-first traversal order", i.e. first by call_depth
// and then by range.begin. This allows finding the range containing an
// address at a certain depth using binary search.
// Note: Using DFS order, i.e. ordering by range.begin first and then by
// call_depth, would not work! Consider the two examples
// "[0..10 at depth 0], [0..2 at depth 1], [6..8 at depth 1]" and
// "[0..5 at depth 0], [0..2 at depth 1], [5..10 at depth 0], [6..8 at depth 1]".
// In this example, if you want to look up address 7 at depth 0, and you
// encounter [0..2 at depth 1], are you before or after the target range?
// You don't know.
inlined_addresses.sort_by(|r1, r2| {
if r1.call_depth < r2.call_depth {
Ordering::Less
} else if r1.call_depth > r2.call_depth {
Ordering::Greater
} else if r1.range.begin < r2.range.begin {
Ordering::Less
} else if r1.range.begin > r2.range.begin {
Ordering::Greater
} else {
Ordering::Equal
}
});
Ok(Function {
dw_die_offset,
name,
inlined_functions: inlined_functions.into_boxed_slice(),
inlined_addresses: inlined_addresses.into_boxed_slice(),
})
}
fn parse_children(
entries: &mut gimli::EntriesRaw<'_, '_, R>,
depth: isize,
file: DebugFile,
unit: &gimli::Unit<R>,
ctx: &Context<R>,
sections: &gimli::Dwarf<R>,
inlined_functions: &mut Vec<InlinedFunction<R>>,
inlined_addresses: &mut Vec<InlinedFunctionAddress>,
inlined_depth: usize,
) -> Result<(), Error> {
loop {
let dw_die_offset = entries.next_offset();
let next_depth = entries.next_depth();
if next_depth <= depth {
return Ok(());
}
if let Some(abbrev) = entries.read_abbreviation()? {
match abbrev.tag() {
gimli::DW_TAG_subprogram => {
Function::skip(entries, abbrev, next_depth)?;
}
gimli::DW_TAG_inlined_subroutine => {
InlinedFunction::parse(
dw_die_offset,
entries,
abbrev,
next_depth,
file,
unit,
ctx,
sections,
inlined_functions,
inlined_addresses,
inlined_depth,
)?;
}
_ => {
entries.skip_attributes(abbrev.attributes())?;
}
}
}
}
}
fn skip(
entries: &mut gimli::EntriesRaw<'_, '_, R>,
abbrev: &gimli::Abbreviation,
depth: isize,
) -> Result<(), Error> {
// TODO: use DW_AT_sibling
entries.skip_attributes(abbrev.attributes())?;
while entries.next_depth() > depth {
if let Some(abbrev) = entries.read_abbreviation()? {
entries.skip_attributes(abbrev.attributes())?;
}
}
Ok(())
}
/// Build the list of inlined functions that contain `probe`.
pub(crate) fn find_inlined_functions(
&self,
probe: u64,
) -> iter::Rev<maybe_small::IntoIter<&InlinedFunction<R>>> {
// `inlined_functions` is ordered from outside to inside.
let mut inlined_functions = maybe_small::Vec::new();
let mut inlined_addresses = &self.inlined_addresses[..];
loop {
let current_depth = inlined_functions.len();
// Look up (probe, current_depth) in inline_ranges.
// `inlined_addresses` is sorted in "breadth-first traversal order", i.e.
// by `call_depth` first, and then by `range.begin`. See the comment at
// the sort call for more information about why.
let search = inlined_addresses.binary_search_by(|range| {
if range.call_depth > current_depth {
Ordering::Greater
} else if range.call_depth < current_depth {
Ordering::Less
} else if range.range.begin > probe {
Ordering::Greater
} else if range.range.end <= probe {
Ordering::Less
} else {
Ordering::Equal
}
});
if let Ok(index) = search {
let function_index = inlined_addresses[index].function;
inlined_functions.push(&self.inlined_functions[function_index]);
inlined_addresses = &inlined_addresses[index + 1..];
} else {
break;
}
}
inlined_functions.into_iter().rev()
}
}
impl<R: gimli::Reader> InlinedFunction<R> {
fn parse(
dw_die_offset: gimli::UnitOffset<R::Offset>,
entries: &mut gimli::EntriesRaw<'_, '_, R>,
abbrev: &gimli::Abbreviation,
depth: isize,
file: DebugFile,
unit: &gimli::Unit<R>,
ctx: &Context<R>,
sections: &gimli::Dwarf<R>,
inlined_functions: &mut Vec<InlinedFunction<R>>,
inlined_addresses: &mut Vec<InlinedFunctionAddress>,
inlined_depth: usize,
) -> Result<(), Error> {
let mut ranges = RangeAttributes::default();
let mut name = None;
let mut call_file = None;
let mut call_line = 0;
let mut call_column = 0;
for spec in abbrev.attributes() {
match entries.read_attribute(*spec) {
Ok(ref attr) => match attr.name() {
gimli::DW_AT_low_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => ranges.low_pc = Some(val),
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.low_pc = Some(sections.address(unit, index)?);
}
_ => {}
},
gimli::DW_AT_high_pc => match attr.value() {
gimli::AttributeValue::Addr(val) => ranges.high_pc = Some(val),
gimli::AttributeValue::DebugAddrIndex(index) => {
ranges.high_pc = Some(sections.address(unit, index)?);
}
gimli::AttributeValue::Udata(val) => ranges.size = Some(val),
_ => {}
},
gimli::DW_AT_ranges => {
ranges.ranges_offset = sections.attr_ranges_offset(unit, attr.value())?;
}
gimli::DW_AT_linkage_name | gimli::DW_AT_MIPS_linkage_name => {
if let Ok(val) = sections.attr_string(unit, attr.value()) {
name = Some(val);
}
}
gimli::DW_AT_name => {
if name.is_none() {
name = sections.attr_string(unit, attr.value()).ok();
}
}
gimli::DW_AT_abstract_origin | gimli::DW_AT_specification => {
if name.is_none() {
name = name_attr(attr.value(), file, unit, ctx, sections, 16)?;
}
}
gimli::DW_AT_call_file => {
// There is a spec issue [1] with how DW_AT_call_file is specified in DWARF 5.
// Before, a file index of 0 would indicate no source file, however in
// DWARF 5 this could be a valid index into the file table.
//
// Implementations such as LLVM generate a file index of 0 when DWARF 5 is
// used.
//
// Thus, if we see a version of 5 or later, treat a file index of 0 as a
// valid index.
// [1]: http://wiki.dwarfstd.org/index.php?title=DWARF5_Line_Table_File_Numbers
if let gimli::AttributeValue::FileIndex(fi) = attr.value() {
if fi > 0 || unit.header.version() >= 5 {
call_file = Some(fi);
}
}
}
gimli::DW_AT_call_line => {
call_line = attr.udata_value().unwrap_or(0) as u32;
}
gimli::DW_AT_call_column => {
call_column = attr.udata_value().unwrap_or(0) as u32;
}
_ => {}
},
Err(e) => return Err(e),
}
}
let function_index = inlined_functions.len();
inlined_functions.push(InlinedFunction {
dw_die_offset,
name,
call_file,
call_line,
call_column,
});
ranges.for_each_range(sections, unit, |range| {
inlined_addresses.push(InlinedFunctionAddress {
range,
call_depth: inlined_depth,
function: function_index,
});
})?;
Function::parse_children(
entries,
depth,
file,
unit,
ctx,
sections,
inlined_functions,
inlined_addresses,
inlined_depth + 1,
)
}
}
fn name_attr<R>(
attr: gimli::AttributeValue<R>,
mut file: DebugFile,
unit: &gimli::Unit<R>,
ctx: &Context<R>,
sections: &gimli::Dwarf<R>,
recursion_limit: usize,
) -> Result<Option<R>, Error>
where
R: gimli::Reader,
{
if recursion_limit == 0 {
return Ok(None);
}
match attr {
gimli::AttributeValue::UnitRef(offset) => {
name_entry(file, unit, offset, ctx, sections, recursion_limit)
}
gimli::AttributeValue::DebugInfoRef(dr) => {
let (unit, offset) = ctx.find_unit(dr, file)?;
name_entry(file, unit, offset, ctx, sections, recursion_limit)
}
gimli::AttributeValue::DebugInfoRefSup(dr) => {
if let Some(sup_sections) = sections.sup.as_ref() {
file = DebugFile::Supplementary;
let (unit, offset) = ctx.find_unit(dr, file)?;
name_entry(file, unit, offset, ctx, sup_sections, recursion_limit)
} else {
Ok(None)
}
}
_ => Ok(None),
}
}
fn name_entry<R>(
file: DebugFile,
unit: &gimli::Unit<R>,
offset: gimli::UnitOffset<R::Offset>,
ctx: &Context<R>,
sections: &gimli::Dwarf<R>,
recursion_limit: usize,
) -> Result<Option<R>, Error>
where
R: gimli::Reader,
{
let mut entries = unit.entries_raw(Some(offset))?;
let abbrev = if let Some(abbrev) = entries.read_abbreviation()? {
abbrev
} else {
return Err(gimli::Error::NoEntryAtGivenOffset);
};
let mut name = None;
let mut next = None;
for spec in abbrev.attributes() {
match entries.read_attribute(*spec) {
Ok(ref attr) => match attr.name() {
gimli::DW_AT_linkage_name | gimli::DW_AT_MIPS_linkage_name => {
if let Ok(val) = sections.attr_string(unit, attr.value()) {
return Ok(Some(val));
}
}
gimli::DW_AT_name => {
if let Ok(val) = sections.attr_string(unit, attr.value()) {
name = Some(val);
}
}
gimli::DW_AT_abstract_origin | gimli::DW_AT_specification => {
next = Some(attr.value());
}
_ => {}
},
Err(e) => return Err(e),
}
}
if name.is_some() {
return Ok(name);
}
if let Some(next) = next {
return name_attr(next, file, unit, ctx, sections, recursion_limit - 1);
}
Ok(None)
}


@@ -1,31 +0,0 @@
use core::cell::UnsafeCell;
pub struct LazyCell<T> {
contents: UnsafeCell<Option<T>>,
}
impl<T> LazyCell<T> {
pub fn new() -> LazyCell<T> {
LazyCell {
contents: UnsafeCell::new(None),
}
}
pub fn borrow(&self) -> Option<&T> {
unsafe { &*self.contents.get() }.as_ref()
}
pub fn borrow_with(&self, closure: impl FnOnce() -> T) -> &T {
// First check if we're already initialized...
let ptr = self.contents.get();
if let Some(val) = unsafe { &*ptr } {
return val;
}
// Note that while we're executing `closure` our `borrow_with` may
// be called recursively. This means we need to check again after
// the closure has executed. For that we use the `get_or_insert`
// method which will only perform mutation if we aren't already
// `Some`.
let val = closure();
unsafe { (*ptr).get_or_insert(val) }
}
}

File diff suppressed because it is too large


@@ -1,126 +0,0 @@
use addr2line::Context;
use fallible_iterator::FallibleIterator;
use findshlibs::{IterationControl, SharedLibrary, TargetSharedLibrary};
use object::Object;
use std::borrow::Cow;
use std::fs::File;
use std::sync::Arc;
fn find_debuginfo() -> memmap2::Mmap {
let path = std::env::current_exe().unwrap();
let file = File::open(&path).unwrap();
let map = unsafe { memmap2::Mmap::map(&file).unwrap() };
let file = &object::File::parse(&*map).unwrap();
if let Ok(uuid) = file.mach_uuid() {
for candidate in path.parent().unwrap().read_dir().unwrap() {
let path = candidate.unwrap().path();
if !path.to_str().unwrap().ends_with(".dSYM") {
continue;
}
for candidate in path.join("Contents/Resources/DWARF").read_dir().unwrap() {
let path = candidate.unwrap().path();
let file = File::open(&path).unwrap();
let map = unsafe { memmap2::Mmap::map(&file).unwrap() };
let file = &object::File::parse(&*map).unwrap();
if file.mach_uuid().unwrap() == uuid {
return map;
}
}
}
}
return map;
}
#[test]
fn correctness() {
let map = find_debuginfo();
let file = &object::File::parse(&*map).unwrap();
let module_base = file.relative_address_base();
let endian = if file.is_little_endian() {
gimli::RunTimeEndian::Little
} else {
gimli::RunTimeEndian::Big
};
fn load_section<'data: 'file, 'file, O, Endian>(
id: gimli::SectionId,
file: &'file O,
endian: Endian,
) -> Result<gimli::EndianArcSlice<Endian>, gimli::Error>
where
O: object::Object<'data, 'file>,
Endian: gimli::Endianity,
{
use object::ObjectSection;
let data = file
.section_by_name(id.name())
.and_then(|section| section.uncompressed_data().ok())
.unwrap_or(Cow::Borrowed(&[]));
Ok(gimli::EndianArcSlice::new(Arc::from(&*data), endian))
}
let dwarf = gimli::Dwarf::load(|id| load_section(id, file, endian)).unwrap();
let ctx = Context::from_dwarf(dwarf).unwrap();
let mut split_dwarf_loader = addr2line::builtin_split_dwarf_loader::SplitDwarfLoader::new(
|data, endian| gimli::EndianArcSlice::new(Arc::from(&*data), endian),
None,
);
let mut bias = None;
TargetSharedLibrary::each(|lib| {
bias = Some((lib.virtual_memory_bias().0 as u64).wrapping_sub(module_base));
IterationControl::Break
});
#[allow(unused_mut)]
let mut test = |sym: u64, expected_prefix: &str| {
let ip = sym.wrapping_sub(bias.unwrap());
let frames = ctx.find_frames(ip);
let frames = split_dwarf_loader.run(frames).unwrap();
let frame = frames.last().unwrap().unwrap();
let name = frame.function.as_ref().unwrap().demangle().unwrap();
// Old Rust versions generate DWARF with the wrong linkage name,
// so only check the start.
if !name.starts_with(expected_prefix) {
panic!("incorrect name '{}', expected {:?}", name, expected_prefix);
}
};
test(test_function as u64, "correctness::test_function");
test(
small::test_function as u64,
"correctness::small::test_function",
);
test(auxiliary::foo as u64, "auxiliary::foo");
}
mod small {
pub fn test_function() {
println!("y");
}
}
fn test_function() {
println!("x");
}
#[test]
fn zero_function() {
let map = find_debuginfo();
let file = &object::File::parse(&*map).unwrap();
let ctx = Context::new(file).unwrap();
for probe in 0..10 {
assert!(
ctx.find_frames(probe)
.skip_all_loads()
.unwrap()
.count()
.unwrap()
< 10
);
}
}


@@ -1,135 +0,0 @@
use std::env;
use std::ffi::OsStr;
use std::path::Path;
use std::process::Command;
use backtrace::Backtrace;
use findshlibs::{IterationControl, SharedLibrary, TargetSharedLibrary};
use libtest_mimic::{Arguments, Failed, Trial};
#[inline(never)]
fn make_trace() -> Vec<String> {
fn foo() -> Backtrace {
bar()
}
#[inline(never)]
fn bar() -> Backtrace {
baz()
}
#[inline(always)]
fn baz() -> Backtrace {
Backtrace::new_unresolved()
}
let mut base_addr = None;
TargetSharedLibrary::each(|lib| {
base_addr = Some(lib.virtual_memory_bias().0 as isize);
IterationControl::Break
});
let addrfix = -base_addr.unwrap();
let trace = foo();
trace
.frames()
.iter()
.take(5)
.map(|x| format!("{:p}", (x.ip() as *const u8).wrapping_offset(addrfix)))
.collect()
}
fn run_cmd<P: AsRef<OsStr>>(exe: P, me: &Path, flags: Option<&str>, trace: &str) -> String {
let mut cmd = Command::new(exe);
cmd.env("LC_ALL", "C"); // GNU addr2line is localized, we aren't
cmd.env("RUST_BACKTRACE", "1"); // if a child crashes, we want to know why
if let Some(flags) = flags {
cmd.arg(flags);
}
cmd.arg("--exe").arg(me).arg(trace);
let output = cmd.output().unwrap();
assert!(output.status.success());
String::from_utf8(output.stdout).unwrap()
}
fn run_test(flags: Option<&str>) -> Result<(), Failed> {
let me = env::current_exe().unwrap();
let mut exe = me.clone();
assert!(exe.pop());
if exe.file_name().unwrap().to_str().unwrap() == "deps" {
assert!(exe.pop());
}
exe.push("examples");
exe.push("addr2line");
assert!(exe.is_file());
let trace = make_trace();
// HACK: GNU addr2line has a bug where looking up multiple addresses can cause the second
// lookup to fail. Workaround by doing one address at a time.
for addr in &trace {
let theirs = run_cmd("addr2line", &me, flags, addr);
let ours = run_cmd(&exe, &me, flags, addr);
// HACK: GNU addr2line does not tidy up paths properly, causing double slashes to be printed.
// We consider our behavior to be correct, so we fix their output to match ours.
let theirs = theirs.replace("//", "/");
assert!(
theirs == ours,
"Output not equivalent:
$ addr2line {0} --exe {1} {2}
{4}
$ {3} {0} --exe {1} {2}
{5}
",
flags.unwrap_or(""),
me.display(),
trace.join(" "),
exe.display(),
theirs,
ours
);
}
Ok(())
}
static FLAGS: &str = "aipsf";
fn make_tests() -> Vec<Trial> {
(0..(1 << FLAGS.len()))
.map(|bits| {
if bits == 0 {
None
} else {
let mut param = String::new();
param.push('-');
for (i, flag) in FLAGS.chars().enumerate() {
if (bits & (1 << i)) != 0 {
param.push(flag);
}
}
Some(param)
}
})
.map(|param| {
Trial::test(
format!("addr2line {}", param.as_ref().map_or("", String::as_str)),
move || run_test(param.as_ref().map(String::as_str)),
)
})
.collect()
}
fn main() {
if !cfg!(target_os = "linux") {
return;
}
let args = Arguments::from_args();
libtest_mimic::run(&args, make_tests()).exit();
}


@@ -1,114 +0,0 @@
use std::borrow::Cow;
use std::env;
use std::fs::File;
use std::path::{self, PathBuf};
use object::Object;
fn release_fixture_path() -> PathBuf {
if let Ok(p) = env::var("ADDR2LINE_FIXTURE_PATH") {
return p.into();
}
let mut path = PathBuf::new();
if let Ok(dir) = env::var("CARGO_MANIFEST_DIR") {
path.push(dir);
}
path.push("fixtures");
path.push("addr2line-release");
path
}
fn with_file<F: FnOnce(&object::File<'_>)>(target: &path::Path, f: F) {
let file = File::open(target).unwrap();
let map = unsafe { memmap2::Mmap::map(&file).unwrap() };
let file = object::File::parse(&*map).unwrap();
f(&file)
}
fn dwarf_load<'a>(object: &object::File<'a>) -> gimli::Dwarf<Cow<'a, [u8]>> {
let load_section = |id: gimli::SectionId| -> Result<Cow<'a, [u8]>, gimli::Error> {
use object::ObjectSection;
let data = object
.section_by_name(id.name())
.and_then(|section| section.data().ok())
.unwrap_or(&[][..]);
Ok(Cow::Borrowed(data))
};
gimli::Dwarf::load(&load_section).unwrap()
}
fn dwarf_borrow<'a>(
dwarf: &'a gimli::Dwarf<Cow<'_, [u8]>>,
) -> gimli::Dwarf<gimli::EndianSlice<'a, gimli::LittleEndian>> {
let borrow_section: &dyn for<'b> Fn(
&'b Cow<'_, [u8]>,
) -> gimli::EndianSlice<'b, gimli::LittleEndian> =
&|section| gimli::EndianSlice::new(section, gimli::LittleEndian);
dwarf.borrow(&borrow_section)
}
#[test]
fn parse_base_rc() {
let target = release_fixture_path();
with_file(&target, |file| {
addr2line::ObjectContext::new(file).unwrap();
});
}
#[test]
fn parse_base_slice() {
let target = release_fixture_path();
with_file(&target, |file| {
let dwarf = dwarf_load(file);
let dwarf = dwarf_borrow(&dwarf);
addr2line::Context::from_dwarf(dwarf).unwrap();
});
}
#[test]
fn parse_lines_rc() {
let target = release_fixture_path();
with_file(&target, |file| {
let context = addr2line::ObjectContext::new(file).unwrap();
context.parse_lines().unwrap();
});
}
#[test]
fn parse_lines_slice() {
let target = release_fixture_path();
with_file(&target, |file| {
let dwarf = dwarf_load(file);
let dwarf = dwarf_borrow(&dwarf);
let context = addr2line::Context::from_dwarf(dwarf).unwrap();
context.parse_lines().unwrap();
});
}
#[test]
fn parse_functions_rc() {
let target = release_fixture_path();
with_file(&target, |file| {
let context = addr2line::ObjectContext::new(file).unwrap();
context.parse_functions().unwrap();
});
}
#[test]
fn parse_functions_slice() {
let target = release_fixture_path();
with_file(&target, |file| {
let dwarf = dwarf_load(file);
let dwarf = dwarf_borrow(&dwarf);
let context = addr2line::Context::from_dwarf(dwarf).unwrap();
context.parse_functions().unwrap();
});
}


@@ -1 +0,0 @@
{"files":{"CHANGELOG.md":"737088e45fdf27fe2cfedce163332d8ce08c58fd86ca287de2de34c0fbaf63e7","Cargo.toml":"f410869f0f1a5697f65a8a77be03da7aeecc0be26e7cf3a1feb1acaa4f518770","LICENSE-0BSD":"861399f8c21c042b110517e76dc6b63a2b334276c8cf17412fc3c8908ca8dc17","LICENSE-APACHE":"8ada45cd9f843acf64e4722ae262c622a2b3b3007c7310ef36ac1061a30f6adb","LICENSE-MIT":"23f18e03dc49df91622fe2a76176497404e46ced8a715d9d2b67a7446571cca3","README.md":"308c50cdb42b9573743068158339570b45ca3f895015ca3b87ba983edb0a21e6","RELEASE_PROCESS.md":"a86cd10fc70f167f8d00e9e4ce0c6b4ebdfa1865058390dffd1e0ad4d3e68d9d","benches/bench.rs":"c07ce370e3680c602e415f8d1ec4e543ea2163ab22a09b6b82d93e8a30adca82","src/algo.rs":"b664b131f724a809591394a10b9023f40ab5963e32a83fa3163c2668e59c8b66","src/lib.rs":"b55ba9c629b30360d08168b2ca0c96275432856a539737a105a6d6ae6bf7e88f"},"package":"f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"}


@@ -1,63 +0,0 @@
# Changelog
## Unreleased
No changes.
## [1.0.2 - 2021-02-26](https://github.com/jonas-schievink/adler/releases/tag/v1.0.2)
- Fix doctest on big-endian systems ([#9]).
[#9]: https://github.com/jonas-schievink/adler/pull/9
## [1.0.1 - 2020-11-08](https://github.com/jonas-schievink/adler/releases/tag/v1.0.1)
### Fixes
- Fix documentation on docs.rs.
## [1.0.0 - 2020-11-08](https://github.com/jonas-schievink/adler/releases/tag/v1.0.0)
### Fixes
- Fix `cargo test --no-default-features` ([#5]).
### Improvements
- Extended and clarified documentation.
- Added more rustdoc examples.
- Extended CI to test the crate with `--no-default-features`.
### Breaking Changes
- `adler32_reader` now takes its generic argument by value instead of as a `&mut`.
- Renamed `adler32_reader` to `adler32`.
## [0.2.3 - 2020-07-11](https://github.com/jonas-schievink/adler/releases/tag/v0.2.3)
- Process 4 Bytes at a time, improving performance by up to 50% ([#2]).
## [0.2.2 - 2020-06-27](https://github.com/jonas-schievink/adler/releases/tag/v0.2.2)
- Bump MSRV to 1.31.0.
## [0.2.1 - 2020-06-27](https://github.com/jonas-schievink/adler/releases/tag/v0.2.1)
- Add a few `#[inline]` annotations to small functions.
- Fix CI badge.
- Allow integration into libstd.
## [0.2.0 - 2020-06-27](https://github.com/jonas-schievink/adler/releases/tag/v0.2.0)
- Support `#![no_std]` when using `default-features = false`.
- Improve performance by around 7x.
- Support Rust 1.8.0.
- Improve API naming.
## [0.1.0 - 2020-06-26](https://github.com/jonas-schievink/adler/releases/tag/v0.1.0)
Initial release.
[#2]: https://github.com/jonas-schievink/adler/pull/2
[#5]: https://github.com/jonas-schievink/adler/pull/5

vendor/adler/Cargo.toml

@@ -1,64 +0,0 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies
#
# If you believe there's an error in this file please file an
# issue against the rust-lang/cargo repository. If you're
# editing this file be aware that the upstream Cargo.toml
# will likely look very different (and much more reasonable)
[package]
name = "adler"
version = "1.0.2"
authors = ["Jonas Schievink <jonasschievink@gmail.com>"]
description = "A simple clean-room implementation of the Adler-32 checksum"
documentation = "https://docs.rs/adler/"
readme = "README.md"
keywords = ["checksum", "integrity", "hash", "adler32", "zlib"]
categories = ["algorithms"]
license = "0BSD OR MIT OR Apache-2.0"
repository = "https://github.com/jonas-schievink/adler.git"
[package.metadata.docs.rs]
rustdoc-args = ["--cfg=docsrs"]
[package.metadata.release]
no-dev-version = true
pre-release-commit-message = "Release {{version}}"
tag-message = "{{version}}"
[[package.metadata.release.pre-release-replacements]]
file = "CHANGELOG.md"
replace = "## Unreleased\n\nNo changes.\n\n## [{{version}} - {{date}}](https://github.com/jonas-schievink/adler/releases/tag/v{{version}})\n"
search = "## Unreleased\n"
[[package.metadata.release.pre-release-replacements]]
file = "README.md"
replace = "adler = \"{{version}}\""
search = "adler = \"[a-z0-9\\\\.-]+\""
[[package.metadata.release.pre-release-replacements]]
file = "src/lib.rs"
replace = "https://docs.rs/adler/{{version}}"
search = "https://docs.rs/adler/[a-z0-9\\.-]+"
[[bench]]
name = "bench"
harness = false
[dependencies.compiler_builtins]
version = "0.1.2"
optional = true
[dependencies.core]
version = "1.0.0"
optional = true
package = "rustc-std-workspace-core"
[dev-dependencies.criterion]
version = "0.3.2"
[features]
default = ["std"]
rustc-dep-of-std = ["core", "compiler_builtins"]
std = []

vendor/adler/LICENSE-0BSD

@@ -1,12 +0,0 @@
Copyright (C) Jonas Schievink <jonasschievink@gmail.com>
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

vendor/adler/LICENSE-APACHE

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/LICENSE-2.0
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/adler/LICENSE-MIT

@@ -1,23 +0,0 @@
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

vendor/adler/README.md

@@ -1,39 +0,0 @@
# Adler-32 checksums for Rust
[![crates.io](https://img.shields.io/crates/v/adler.svg)](https://crates.io/crates/adler)
[![docs.rs](https://docs.rs/adler/badge.svg)](https://docs.rs/adler/)
![CI](https://github.com/jonas-schievink/adler/workflows/CI/badge.svg)
This crate provides a simple implementation of the Adler-32 checksum, used in
the zlib compression format.
Please refer to the [changelog](CHANGELOG.md) to see what changed in the last
releases.
## Features
- Permissively licensed (0BSD) clean-room implementation.
- Zero dependencies.
- Zero `unsafe`.
- Decent performance (3-4 GB/s).
- Supports `#![no_std]` (with `default-features = false`).
## Usage
Add an entry to your `Cargo.toml`:
```toml
[dependencies]
adler = "1.0.2"
```
Check the [API Documentation](https://docs.rs/adler/) for how to use the
crate's functionality.
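For quick orientation, a minimal sketch of both entry points shown in `src/lib.rs` below, the one-shot `adler32_slice` helper and the incremental `Adler32` type:
```rust
use adler::{adler32_slice, Adler32};

fn main() {
    // One-shot checksum of a byte slice.
    let sum = adler32_slice(b"Wikipedia");
    assert_eq!(sum, 0x11E60398); // known value, also used in the crate's own tests

    // The same checksum computed incrementally across two writes.
    let mut hasher = Adler32::new();
    hasher.write_slice(b"Wiki");
    hasher.write_slice(b"pedia");
    assert_eq!(hasher.checksum(), sum);
}
```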
## Rust version support
Currently, this crate supports all Rust versions starting at Rust 1.31.0.
Bumping the Minimum Supported Rust Version (MSRV) is *not* considered a breaking
change, but will not be done without good reasons. The latest 3 stable Rust
versions will always be supported no matter what.

vendor/adler/RELEASE_PROCESS.md

@@ -1,13 +0,0 @@
# What to do to publish a new release
1. Ensure all notable changes are in the changelog under "Unreleased".
2. Execute `cargo release <level>` to bump version(s), tag and publish
everything. External subcommand, must be installed with `cargo install
cargo-release`.
`<level>` can be one of `major|minor|patch`. If this is the first release
(`0.1.0`), use `minor`, since the version starts out as `0.0.0`.
3. Go to the GitHub releases, edit the just-pushed tag. Copy the release notes
from the changelog.

vendor/adler/benches/bench.rs

@@ -1,109 +0,0 @@
extern crate adler;
extern crate criterion;
use adler::{adler32_slice, Adler32};
use criterion::{criterion_group, criterion_main, Criterion, Throughput};
fn simple(c: &mut Criterion) {
{
const SIZE: usize = 100;
let mut group = c.benchmark_group("simple-100b");
group.throughput(Throughput::Bytes(SIZE as u64));
group.bench_function("zeroes-100", |bencher| {
bencher.iter(|| {
adler32_slice(&[0; SIZE]);
});
});
group.bench_function("ones-100", |bencher| {
bencher.iter(|| {
adler32_slice(&[0xff; SIZE]);
});
});
}
{
const SIZE: usize = 1024;
let mut group = c.benchmark_group("simple-1k");
group.throughput(Throughput::Bytes(SIZE as u64));
group.bench_function("zeroes-1k", |bencher| {
bencher.iter(|| {
adler32_slice(&[0; SIZE]);
});
});
group.bench_function("ones-1k", |bencher| {
bencher.iter(|| {
adler32_slice(&[0xff; SIZE]);
});
});
}
{
const SIZE: usize = 1024 * 1024;
let mut group = c.benchmark_group("simple-1m");
group.throughput(Throughput::Bytes(SIZE as u64));
group.bench_function("zeroes-1m", |bencher| {
bencher.iter(|| {
adler32_slice(&[0; SIZE]);
});
});
group.bench_function("ones-1m", |bencher| {
bencher.iter(|| {
adler32_slice(&[0xff; SIZE]);
});
});
}
}
fn chunked(c: &mut Criterion) {
const SIZE: usize = 16 * 1024 * 1024;
let data = vec![0xAB; SIZE];
let mut group = c.benchmark_group("chunked-16m");
group.throughput(Throughput::Bytes(SIZE as u64));
group.bench_function("5552", |bencher| {
bencher.iter(|| {
let mut h = Adler32::new();
for chunk in data.chunks(5552) {
h.write_slice(chunk);
}
h.checksum()
});
});
group.bench_function("8k", |bencher| {
bencher.iter(|| {
let mut h = Adler32::new();
for chunk in data.chunks(8 * 1024) {
h.write_slice(chunk);
}
h.checksum()
});
});
group.bench_function("64k", |bencher| {
bencher.iter(|| {
let mut h = Adler32::new();
for chunk in data.chunks(64 * 1024) {
h.write_slice(chunk);
}
h.checksum()
});
});
group.bench_function("1m", |bencher| {
bencher.iter(|| {
let mut h = Adler32::new();
for chunk in data.chunks(1024 * 1024) {
h.write_slice(chunk);
}
h.checksum()
});
});
}
criterion_group!(benches, simple, chunked);
criterion_main!(benches);

vendor/adler/src/algo.rs

@@ -1,146 +0,0 @@
use crate::Adler32;
use std::ops::{AddAssign, MulAssign, RemAssign};
impl Adler32 {
pub(crate) fn compute(&mut self, bytes: &[u8]) {
// The basic algorithm is, for every byte:
// a = (a + byte) % MOD
// b = (b + a) % MOD
// where MOD = 65521.
//
// For efficiency, we can defer the `% MOD` operations as long as neither a nor b overflows:
// - Between calls to `write`, we ensure that a and b are always in range 0..MOD.
// - We use 32-bit arithmetic in this function.
// - Therefore, a and b must not increase by more than 2^32-MOD without performing a `% MOD`
// operation.
//
// According to Wikipedia, b is calculated as follows for non-incremental checksumming:
// b = n×D1 + (n-1)×D2 + (n-2)×D3 + ... + Dn + n*1 (mod 65521)
// Where n is the number of bytes and Di is the i-th Byte. We need to change this to account
// for the previous values of a and b, as well as treat every input Byte as being 255:
// b_inc = n×255 + (n-1)×255 + ... + 255 + n*65520
// Or in other words:
// b_inc = n*65520 + n(n+1)/2*255
// The max chunk size is thus the largest value of n so that b_inc <= 2^32-65521.
// 2^32-65521 = n*65520 + n(n+1)/2*255
// Plugging this into an equation solver since I can't math gives n = 5552.18..., so 5552.
//
// On top of the optimization outlined above, the algorithm can also be parallelized with a
// bit more work:
//
// Note that b is a linear combination of a vector of input bytes (D1, ..., Dn).
//
// If we fix some value k<N and rewrite indices 1, ..., N as
//
// 1_1, 1_2, ..., 1_k, 2_1, ..., 2_k, ..., (N/k)_k,
//
// then we can express a and b in terms of sums of smaller sequences kb and ka:
//
// ka(j) := D1_j + D2_j + ... + D(N/k)_j where j <= k
// kb(j) := (N/k)*D1_j + (N/k-1)*D2_j + ... + D(N/k)_j where j <= k
//
// a = ka(1) + ka(2) + ... + ka(k) + 1
// b = k*(kb(1) + kb(2) + ... + kb(k)) - 1*ka(2) - ... - (k-1)*ka(k) + N
//
// We use this insight to unroll the main loop and process k=4 bytes at a time.
// The resulting code is highly amenable to SIMD acceleration, although the immediate speedups
// stem from increased pipeline parallelism rather than auto-vectorization.
//
// This technique is described in-depth (here:)[https://software.intel.com/content/www/us/\
// en/develop/articles/fast-computation-of-fletcher-checksums.html]
const MOD: u32 = 65521;
const CHUNK_SIZE: usize = 5552 * 4;
let mut a = u32::from(self.a);
let mut b = u32::from(self.b);
let mut a_vec = U32X4([0; 4]);
let mut b_vec = a_vec;
let (bytes, remainder) = bytes.split_at(bytes.len() - bytes.len() % 4);
// iterate over 4 bytes at a time
let chunk_iter = bytes.chunks_exact(CHUNK_SIZE);
let remainder_chunk = chunk_iter.remainder();
for chunk in chunk_iter {
for byte_vec in chunk.chunks_exact(4) {
let val = U32X4::from(byte_vec);
a_vec += val;
b_vec += a_vec;
}
b += CHUNK_SIZE as u32 * a;
a_vec %= MOD;
b_vec %= MOD;
b %= MOD;
}
// special-case the final chunk because it may be shorter than the rest
for byte_vec in remainder_chunk.chunks_exact(4) {
let val = U32X4::from(byte_vec);
a_vec += val;
b_vec += a_vec;
}
b += remainder_chunk.len() as u32 * a;
a_vec %= MOD;
b_vec %= MOD;
b %= MOD;
// combine the sub-sum results into the main sum
b_vec *= 4;
b_vec.0[1] += MOD - a_vec.0[1];
b_vec.0[2] += (MOD - a_vec.0[2]) * 2;
b_vec.0[3] += (MOD - a_vec.0[3]) * 3;
for &av in a_vec.0.iter() {
a += av;
}
for &bv in b_vec.0.iter() {
b += bv;
}
// iterate over the remaining few bytes in serial
for &byte in remainder.iter() {
a += u32::from(byte);
b += a;
}
self.a = (a % MOD) as u16;
self.b = (b % MOD) as u16;
}
}
#[derive(Copy, Clone)]
struct U32X4([u32; 4]);
impl U32X4 {
fn from(bytes: &[u8]) -> Self {
U32X4([
u32::from(bytes[0]),
u32::from(bytes[1]),
u32::from(bytes[2]),
u32::from(bytes[3]),
])
}
}
impl AddAssign<Self> for U32X4 {
fn add_assign(&mut self, other: Self) {
for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
*s += o;
}
}
}
impl RemAssign<u32> for U32X4 {
fn rem_assign(&mut self, quotient: u32) {
for s in self.0.iter_mut() {
*s %= quotient;
}
}
}
impl MulAssign<u32> for U32X4 {
fn mul_assign(&mut self, rhs: u32) {
for s in self.0.iter_mut() {
*s *= rhs;
}
}
}
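The long comment inside `compute` above reduces to the textbook per-byte recurrence. As a plain scalar reference point (a sketch, not part of the crate), the same checksum can be computed as follows, which the chunked, deferred-modulo version is equivalent to:
```rust
/// Textbook scalar Adler-32: reduce modulo 65521 after every byte.
/// Much slower than the chunked implementation, but easy to check against.
fn adler32_scalar(bytes: &[u8]) -> u32 {
    const MOD: u32 = 65521;
    let mut a: u32 = 1;
    let mut b: u32 = 0;
    for &byte in bytes {
        a = (a + u32::from(byte)) % MOD;
        b = (b + a) % MOD;
    }
    (b << 16) | a
}
```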

vendor/adler/src/lib.rs

@@ -1,287 +0,0 @@
//! Adler-32 checksum implementation.
//!
//! This implementation features:
//!
//! - Permissively licensed (0BSD) clean-room implementation.
//! - Zero dependencies.
//! - Zero `unsafe`.
//! - Decent performance (3-4 GB/s).
//! - `#![no_std]` support (with `default-features = false`).
#![doc(html_root_url = "https://docs.rs/adler/1.0.2")]
// Deny a few warnings in doctests, since rustdoc `allow`s many warnings by default
#![doc(test(attr(deny(unused_imports, unused_must_use))))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![warn(missing_debug_implementations)]
#![forbid(unsafe_code)]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(not(feature = "std"))]
extern crate core as std;
mod algo;
use std::hash::Hasher;
#[cfg(feature = "std")]
use std::io::{self, BufRead};
/// Adler-32 checksum calculator.
///
/// An instance of this type is equivalent to an Adler-32 checksum: It can be created in the default
/// state via [`new`] (or the provided `Default` impl), or from a precalculated checksum via
/// [`from_checksum`], and the currently stored checksum can be fetched via [`checksum`].
///
/// This type also implements `Hasher`, which makes it easy to calculate Adler-32 checksums of any
/// type that implements or derives `Hash`. This also allows using Adler-32 in a `HashMap`, although
/// that is not recommended (while every checksum is a hash function, they are not necessarily a
/// good one).
///
/// # Examples
///
/// Basic, piecewise checksum calculation:
///
/// ```
/// use adler::Adler32;
///
/// let mut adler = Adler32::new();
///
/// adler.write_slice(&[0, 1, 2]);
/// adler.write_slice(&[3, 4, 5]);
///
/// assert_eq!(adler.checksum(), 0x00290010);
/// ```
///
/// Using `Hash` to process structures:
///
/// ```
/// use std::hash::Hash;
/// use adler::Adler32;
///
/// #[derive(Hash)]
/// struct Data {
/// byte: u8,
/// word: u16,
/// big: u64,
/// }
///
/// let mut adler = Adler32::new();
///
/// let data = Data { byte: 0x1F, word: 0xABCD, big: !0 };
/// data.hash(&mut adler);
///
/// // hash value depends on architecture endianness
/// if cfg!(target_endian = "little") {
/// assert_eq!(adler.checksum(), 0x33410990);
/// }
/// if cfg!(target_endian = "big") {
/// assert_eq!(adler.checksum(), 0x331F0990);
/// }
///
/// ```
///
/// [`new`]: #method.new
/// [`from_checksum`]: #method.from_checksum
/// [`checksum`]: #method.checksum
#[derive(Debug, Copy, Clone)]
pub struct Adler32 {
a: u16,
b: u16,
}
impl Adler32 {
/// Creates a new Adler-32 instance with default state.
#[inline]
pub fn new() -> Self {
Self::default()
}
/// Creates an `Adler32` instance from a precomputed Adler-32 checksum.
///
/// This allows resuming checksum calculation without having to keep the `Adler32` instance
/// around.
///
/// # Example
///
/// ```
/// # use adler::Adler32;
/// let parts = [
/// "rust",
/// "acean",
/// ];
/// let whole = adler::adler32_slice(b"rustacean");
///
/// let mut sum = Adler32::new();
/// sum.write_slice(parts[0].as_bytes());
/// let partial = sum.checksum();
///
/// // ...later
///
/// let mut sum = Adler32::from_checksum(partial);
/// sum.write_slice(parts[1].as_bytes());
/// assert_eq!(sum.checksum(), whole);
/// ```
#[inline]
pub fn from_checksum(sum: u32) -> Self {
Adler32 {
a: sum as u16,
b: (sum >> 16) as u16,
}
}
/// Returns the calculated checksum at this point in time.
#[inline]
pub fn checksum(&self) -> u32 {
(u32::from(self.b) << 16) | u32::from(self.a)
}
/// Adds `bytes` to the checksum calculation.
///
/// If efficiency matters, this should be called with Byte slices that contain at least a few
/// thousand Bytes.
pub fn write_slice(&mut self, bytes: &[u8]) {
self.compute(bytes);
}
}
impl Default for Adler32 {
#[inline]
fn default() -> Self {
Adler32 { a: 1, b: 0 }
}
}
impl Hasher for Adler32 {
#[inline]
fn finish(&self) -> u64 {
u64::from(self.checksum())
}
fn write(&mut self, bytes: &[u8]) {
self.write_slice(bytes);
}
}
/// Calculates the Adler-32 checksum of a byte slice.
///
/// This is a convenience function around the [`Adler32`] type.
///
/// [`Adler32`]: struct.Adler32.html
pub fn adler32_slice(data: &[u8]) -> u32 {
let mut h = Adler32::new();
h.write_slice(data);
h.checksum()
}
/// Calculates the Adler-32 checksum of a `BufRead`'s contents.
///
/// The passed `BufRead` implementor will be read until it reaches EOF (or until it reports an
/// error).
///
/// If you only have a `Read` implementor, you can wrap it in `std::io::BufReader` before calling
/// this function.
///
/// # Errors
///
/// Any error returned by the reader is bubbled up by this function.
///
/// # Examples
///
/// ```no_run
/// # fn run() -> Result<(), Box<dyn std::error::Error>> {
/// use adler::adler32;
///
/// use std::fs::File;
/// use std::io::BufReader;
///
/// let file = File::open("input.txt")?;
/// let mut file = BufReader::new(file);
///
/// adler32(&mut file)?;
/// # Ok(()) }
/// # fn main() { run().unwrap() }
/// ```
#[cfg(feature = "std")]
#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
pub fn adler32<R: BufRead>(mut reader: R) -> io::Result<u32> {
let mut h = Adler32::new();
loop {
let len = {
let buf = reader.fill_buf()?;
if buf.is_empty() {
return Ok(h.checksum());
}
h.write_slice(buf);
buf.len()
};
reader.consume(len);
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn zeroes() {
assert_eq!(adler32_slice(&[]), 1);
assert_eq!(adler32_slice(&[0]), 1 | 1 << 16);
assert_eq!(adler32_slice(&[0, 0]), 1 | 2 << 16);
assert_eq!(adler32_slice(&[0; 100]), 0x00640001);
assert_eq!(adler32_slice(&[0; 1024]), 0x04000001);
assert_eq!(adler32_slice(&[0; 1024 * 1024]), 0x00f00001);
}
#[test]
fn ones() {
assert_eq!(adler32_slice(&[0xff; 1024]), 0x79a6fc2e);
assert_eq!(adler32_slice(&[0xff; 1024 * 1024]), 0x8e88ef11);
}
#[test]
fn mixed() {
assert_eq!(adler32_slice(&[1]), 2 | 2 << 16);
assert_eq!(adler32_slice(&[40]), 41 | 41 << 16);
assert_eq!(adler32_slice(&[0xA5; 1024 * 1024]), 0xd5009ab1);
}
/// Example calculation from https://en.wikipedia.org/wiki/Adler-32.
#[test]
fn wiki() {
assert_eq!(adler32_slice(b"Wikipedia"), 0x11E60398);
}
#[test]
fn resume() {
let mut adler = Adler32::new();
adler.write_slice(&[0xff; 1024]);
let partial = adler.checksum();
assert_eq!(partial, 0x79a6fc2e); // from above
adler.write_slice(&[0xff; 1024 * 1024 - 1024]);
assert_eq!(adler.checksum(), 0x8e88ef11); // from above
// Make sure that we can resume computing from the partial checksum via `from_checksum`.
let mut adler = Adler32::from_checksum(partial);
adler.write_slice(&[0xff; 1024 * 1024 - 1024]);
assert_eq!(adler.checksum(), 0x8e88ef11); // from above
}
#[cfg(feature = "std")]
#[test]
fn bufread() {
use std::io::BufReader;
fn test(data: &[u8], checksum: u32) {
// `BufReader` uses an 8 KB buffer, so this will test buffer refilling.
let mut buf = BufReader::new(data);
let real_sum = adler32(&mut buf).unwrap();
assert_eq!(checksum, real_sum);
}
test(&[], 1);
test(&[0; 1024], 0x04000001);
test(&[0; 1024 * 1024], 0x00f00001);
test(&[0xA5; 1024 * 1024], 0xd5009ab1);
}
}

vendor/anstream/.cargo-checksum.json

@@ -1 +0,0 @@
{"files":{"Cargo.lock":"e89078a9d7e89f125bea210c74fd30ef1167c208b9b240baa3fe76ec1170f6ec","Cargo.toml":"38deb1bfcca1eaef87c409274c63f9b25df94f6faaebc74061fa7ef1e4f078f1","LICENSE-APACHE":"c6596eb7be8581c18be736c846fb9173b69eccf6ef94c5135893ec56bd92ba08","LICENSE-MIT":"6efb0476a1cc085077ed49357026d8c173bf33017278ef440f222fb9cbcb66e6","README.md":"b230c2257d0c7a49b9bd97f2fa73abedcdc055757b5cedd2b0eb1a7a448ff461","benches/stream.rs":"7e666c4f4b79ddb5237361ed25264a966ee241192fbb2c1baea3006e3e0326b4","benches/strip.rs":"9603bd5ca1ae4661c2ccab50315dbfdec0c661ac2624262172bbd8f5d0bd87c9","benches/wincon.rs":"680e86933c008b242a3286c5149c33d3c086426eb99fe134b6e79f7578f96663","examples/dump-stream.rs":"54b2bce2409fc1a1f00dbdcab7abbbb6cde447fa20b5c829d1b17ce2e15eefd1","examples/query-stream.rs":"16f38843083174fbefa974a5aa38a5f3ffa51bd6e6db3dc1d91164462219399e","src/adapter/mod.rs":"baf4237ea0b18df63609e49d93572ca27c2202a4cbec0220adb5a7e815c7d8ed","src/adapter/strip.rs":"010972f96708c56da9bced98287f134ce43a4f6459c22c1697abdc4fd6f82d00","src/adapter/wincon.rs":"07d75878ca9edcef4f473a5ff6113b40aab681dcbcd1ae9de1ec895332f7cc2a","src/auto.rs":"71c249ab6b0af64c3946817ea9f1719d4b789128c244611a05075b1e13413007","src/buffer.rs":"83e7088b50dd3e2941c06a417d9eef75fda45311a2912ba94f480ec98d6f0183","src/fmt.rs":"cc11b005c4559843bd908a57958a13c8d0922fae6aff5261f3583c90e60da73c","src/lib.rs":"649b86b187835e0e33baaaf2242c5f331b7dff133fae8fc419c52b7add797c57","src/macros.rs":"a26ababe32a39732d0aade9674f6e5e267bd26c6ea06603ff9e61e80681195e0","src/stream.rs":"cbe8f61fba4c3c60934339c8bda5d1ff43320f57cdc4ed409aa173945a941b3d","src/strip.rs":"56e6516283b6c0dfa72a8e0e6679da8424295f50a3e56c44281e76de6aa0344b","src/wincon.rs":"fe5aff7bfd80b14c9a6b07143079d59b81831293ad766b845e46fad2e1459c9a"},"package":"d664a92ecae85fd0a7392615844904654d1d5f5514837f471ddef4a057aba1b6"}

vendor/anstream/Cargo.lock (generated, vendored): 1094 lines

File diff suppressed because it is too large

vendor/anstream/Cargo.toml

@@ -1,144 +0,0 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2021"
rust-version = "1.70.0"
name = "anstream"
version = "0.6.5"
include = [
"build.rs",
"src/**/*",
"Cargo.toml",
"Cargo.lock",
"LICENSE*",
"README.md",
"benches/**/*",
"examples/**/*",
]
description = "A simple cross platform library for writing colored text to a terminal."
homepage = "https://github.com/rust-cli/anstyle"
readme = "README.md"
keywords = [
"ansi",
"terminal",
"color",
"strip",
"wincon",
]
categories = ["command-line-interface"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/rust-cli/anstyle.git"
[package.metadata.docs.rs]
cargo-args = [
"-Zunstable-options",
"-Zrustdoc-scrape-examples",
]
rustdoc-args = [
"--cfg",
"docsrs",
]
[[package.metadata.release.pre-release-replacements]]
file = "CHANGELOG.md"
min = 1
replace = "{{version}}"
search = "Unreleased"
[[package.metadata.release.pre-release-replacements]]
exactly = 1
file = "CHANGELOG.md"
replace = "...{{tag_name}}"
search = '\.\.\.HEAD'
[[package.metadata.release.pre-release-replacements]]
file = "CHANGELOG.md"
min = 1
replace = "{{date}}"
search = "ReleaseDate"
[[package.metadata.release.pre-release-replacements]]
exactly = 1
file = "CHANGELOG.md"
replace = """
<!-- next-header -->
## [Unreleased] - ReleaseDate
"""
search = "<!-- next-header -->"
[[package.metadata.release.pre-release-replacements]]
exactly = 1
file = "CHANGELOG.md"
replace = """
<!-- next-url -->
[Unreleased]: https://github.com/rust-cli/anstyle/compare/{{tag_name}}...HEAD"""
search = "<!-- next-url -->"
[[bench]]
name = "strip"
harness = false
[[bench]]
name = "wincon"
harness = false
[[bench]]
name = "stream"
harness = false
[dependencies.anstyle]
version = "1.0.0"
[dependencies.anstyle-parse]
version = "0.2.0"
[dependencies.anstyle-query]
version = "1.0.0"
optional = true
[dependencies.colorchoice]
version = "1.0.0"
optional = true
[dependencies.utf8parse]
version = "0.2.1"
[dev-dependencies.criterion]
version = "0.5.1"
[dev-dependencies.lexopt]
version = "0.3.0"
[dev-dependencies.owo-colors]
version = "3.5.0"
[dev-dependencies.proptest]
version = "1.4.0"
[dev-dependencies.strip-ansi-escapes]
version = "0.2.0"
[features]
auto = [
"dep:anstyle-query",
"dep:colorchoice",
]
default = [
"auto",
"wincon",
]
test = []
wincon = ["dep:anstyle-wincon"]
[target."cfg(windows)".dependencies.anstyle-wincon]
version = "3.0.1"
optional = true

vendor/anstream/LICENSE-APACHE

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/anstream/LICENSE-MIT

@@ -1,19 +0,0 @@
Copyright (c) Individual contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

vendor/anstream/README.md

@@ -1,34 +0,0 @@
# anstream
> A simple cross platform library for writing colored text to a terminal.
*A portmanteau of "ansi stream"*
[![Documentation](https://img.shields.io/badge/docs-master-blue.svg)][Documentation]
![License](https://img.shields.io/crates/l/anstream.svg)
[![Crates Status](https://img.shields.io/crates/v/anstream.svg)](https://crates.io/crates/anstream)
Specialized `stdout` and `stderr` that accept ANSI escape codes and adapt them
based on the terminal's capabilities.
`anstream::adapter::strip_str` may also be of interest on its own for low
overhead stripping of ANSI escape codes.
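A minimal sketch of typical usage, assuming only the `anstream::stdout()` and `anstream::adapter::strip_str` entry points that appear elsewhere in this diff:
```rust
use std::io::Write as _;

fn main() -> std::io::Result<()> {
    // Adaptive stdout: escape codes are passed through, stripped, or converted
    // depending on what the attached terminal supports.
    let stdout = anstream::stdout();
    let mut stdout = stdout.lock();
    writeln!(stdout, "\x1b[32mgreen\x1b[0m and plain")?;

    // Standalone stripping, independent of any stream.
    let plain = anstream::adapter::strip_str("\x1b[32mfoo\x1b[m bar").to_string();
    assert_eq!(plain, "foo bar");
    Ok(())
}
```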
## License
Licensed under either of
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally
submitted for inclusion in the work by you, as defined in the Apache-2.0
license, shall be dual licensed as above, without any additional terms or
conditions.
[Crates.io]: https://crates.io/crates/anstream
[Documentation]: https://docs.rs/anstream

vendor/anstream/benches/stream.rs

@@ -1,81 +0,0 @@
use std::io::Write as _;
use criterion::{black_box, Criterion};
fn stream(c: &mut Criterion) {
for (name, content) in [
("demo.vte", &include_bytes!("../tests/demo.vte")[..]),
("rg_help.vte", &include_bytes!("../tests/rg_help.vte")[..]),
("rg_linus.vte", &include_bytes!("../tests/rg_linus.vte")[..]),
(
"state_changes",
&b"\x1b]2;X\x1b\\ \x1b[0m \x1bP0@\x1b\\"[..],
),
] {
let mut group = c.benchmark_group(name);
group.bench_function("nop", |b| {
b.iter(|| {
let buffer = Vec::with_capacity(content.len());
let mut stream = buffer;
stream.write_all(content).unwrap();
black_box(stream)
})
});
group.bench_function("StripStream", |b| {
b.iter(|| {
let buffer = Vec::with_capacity(content.len());
let mut stream = anstream::StripStream::new(buffer);
stream.write_all(content).unwrap();
black_box(stream)
})
});
#[cfg(all(windows, feature = "wincon"))]
group.bench_function("WinconStream", |b| {
b.iter(|| {
let buffer = Vec::with_capacity(content.len());
let mut stream = anstream::WinconStream::new(buffer);
stream.write_all(content).unwrap();
black_box(stream)
})
});
group.bench_function("AutoStream::always_ansi", |b| {
b.iter(|| {
let buffer = Vec::with_capacity(content.len());
let mut stream = anstream::AutoStream::always_ansi(buffer);
stream.write_all(content).unwrap();
black_box(stream)
})
});
group.bench_function("AutoStream::always", |b| {
b.iter(|| {
let buffer = Vec::with_capacity(content.len());
let mut stream = anstream::AutoStream::always(buffer);
stream.write_all(content).unwrap();
black_box(stream)
})
});
group.bench_function("AutoStream::never", |b| {
b.iter(|| {
let buffer = Vec::with_capacity(content.len());
let mut stream = anstream::AutoStream::never(buffer);
stream.write_all(content).unwrap();
black_box(stream)
})
});
}
}
criterion::criterion_group!(benches, stream);
criterion::criterion_main!(benches);

vendor/anstream/benches/strip.rs

@@ -1,102 +0,0 @@
use criterion::{black_box, Criterion};
#[derive(Default)]
struct Strip(String);
impl Strip {
fn with_capacity(capacity: usize) -> Self {
Self(String::with_capacity(capacity))
}
}
impl anstyle_parse::Perform for Strip {
fn print(&mut self, c: char) {
self.0.push(c);
}
fn execute(&mut self, byte: u8) {
if byte.is_ascii_whitespace() {
self.0.push(byte as char);
}
}
}
fn strip(c: &mut Criterion) {
for (name, content) in [
("demo.vte", &include_bytes!("../tests/demo.vte")[..]),
("rg_help.vte", &include_bytes!("../tests/rg_help.vte")[..]),
("rg_linus.vte", &include_bytes!("../tests/rg_linus.vte")[..]),
(
"state_changes",
&b"\x1b]2;X\x1b\\ \x1b[0m \x1bP0@\x1b\\"[..],
),
] {
// Make sure the comparison is fair
if let Ok(content) = std::str::from_utf8(content) {
let mut stripped = Strip::with_capacity(content.len());
let mut parser = anstyle_parse::Parser::<anstyle_parse::DefaultCharAccumulator>::new();
for byte in content.as_bytes() {
parser.advance(&mut stripped, *byte);
}
assert_eq!(
stripped.0,
anstream::adapter::strip_str(content).to_string()
);
assert_eq!(
stripped.0,
String::from_utf8(anstream::adapter::strip_bytes(content.as_bytes()).into_vec())
.unwrap()
);
}
let mut group = c.benchmark_group(name);
group.bench_function("advance_strip", |b| {
b.iter(|| {
let mut stripped = Strip::with_capacity(content.len());
let mut parser =
anstyle_parse::Parser::<anstyle_parse::DefaultCharAccumulator>::new();
for byte in content {
parser.advance(&mut stripped, *byte);
}
black_box(stripped.0)
})
});
group.bench_function("strip_ansi_escapes", |b| {
b.iter(|| {
let stripped = strip_ansi_escapes::strip(content);
black_box(stripped)
})
});
if let Ok(content) = std::str::from_utf8(content) {
group.bench_function("strip_str", |b| {
b.iter(|| {
let stripped = anstream::adapter::strip_str(content).to_string();
black_box(stripped)
})
});
group.bench_function("StripStr", |b| {
b.iter(|| {
let mut stripped = String::with_capacity(content.len());
let mut state = anstream::adapter::StripStr::new();
for printable in state.strip_next(content) {
stripped.push_str(printable);
}
black_box(stripped)
})
});
}
group.bench_function("strip_bytes", |b| {
b.iter(|| {
let stripped = anstream::adapter::strip_bytes(content).into_vec();
black_box(stripped)
})
});
}
}
criterion::criterion_group!(benches, strip);
criterion::criterion_main!(benches);

vendor/anstream/benches/wincon.rs

@@ -1,26 +0,0 @@
use criterion::{black_box, Criterion};
fn wincon(c: &mut Criterion) {
for (name, content) in [
("demo.vte", &include_bytes!("../tests/demo.vte")[..]),
("rg_help.vte", &include_bytes!("../tests/rg_help.vte")[..]),
("rg_linus.vte", &include_bytes!("../tests/rg_linus.vte")[..]),
(
"state_changes",
&b"\x1b]2;X\x1b\\ \x1b[0m \x1bP0@\x1b\\"[..],
),
] {
let mut group = c.benchmark_group(name);
group.bench_function("wincon_bytes", |b| {
b.iter(|| {
let mut state = anstream::adapter::WinconBytes::new();
let stripped = state.extract_next(content).collect::<Vec<_>>();
black_box(stripped)
})
});
}
}
criterion::criterion_group!(benches, wincon);
criterion::criterion_main!(benches);

vendor/anstream/examples/dump-stream.rs

@@ -1,128 +0,0 @@
use std::io::Write;
fn main() -> Result<(), lexopt::Error> {
let args = Args::parse()?;
let stdout = anstream::stdout();
let mut stdout = stdout.lock();
for fixed in 0..16 {
let style = style(fixed, args.layer, args.effects);
let _ = print_number(&mut stdout, fixed, style);
if fixed == 7 || fixed == 15 {
let _ = writeln!(&mut stdout);
}
}
for r in 0..6 {
let _ = writeln!(stdout);
for g in 0..6 {
for b in 0..6 {
let fixed = r * 36 + g * 6 + b + 16;
let style = style(fixed, args.layer, args.effects);
let _ = print_number(&mut stdout, fixed, style);
}
let _ = writeln!(stdout);
}
}
for c in 0..24 {
if 0 == c % 8 {
let _ = writeln!(stdout);
}
let fixed = 232 + c;
let style = style(fixed, args.layer, args.effects);
let _ = print_number(&mut stdout, fixed, style);
}
Ok(())
}
fn style(fixed: u8, layer: Layer, effects: anstyle::Effects) -> anstyle::Style {
let color = anstyle::Ansi256Color(fixed).into();
(match layer {
Layer::Fg => anstyle::Style::new().fg_color(Some(color)),
Layer::Bg => anstyle::Style::new().bg_color(Some(color)),
Layer::Underline => anstyle::Style::new().underline_color(Some(color)),
}) | effects
}
fn print_number(stdout: &mut impl Write, fixed: u8, style: anstyle::Style) -> std::io::Result<()> {
write!(
stdout,
"{}{:>4}{}",
style.render(),
fixed,
anstyle::Reset.render()
)
}
#[derive(Default)]
struct Args {
effects: anstyle::Effects,
layer: Layer,
}
#[derive(Copy, Clone, Default)]
enum Layer {
#[default]
Fg,
Bg,
Underline,
}
impl Args {
fn parse() -> Result<Self, lexopt::Error> {
use lexopt::prelude::*;
let mut res = Args::default();
let mut args = lexopt::Parser::from_env();
while let Some(arg) = args.next()? {
match arg {
Long("layer") => {
res.layer = args.value()?.parse_with(|s| match s {
"fg" => Ok(Layer::Fg),
"bg" => Ok(Layer::Bg),
"underline" => Ok(Layer::Underline),
_ => Err("expected values fg, bg, underline"),
})?;
}
Long("effect") => {
const EFFECTS: [(&str, anstyle::Effects); 12] = [
("bold", anstyle::Effects::BOLD),
("dimmed", anstyle::Effects::DIMMED),
("italic", anstyle::Effects::ITALIC),
("underline", anstyle::Effects::UNDERLINE),
("double_underline", anstyle::Effects::DOUBLE_UNDERLINE),
("curly_underline", anstyle::Effects::CURLY_UNDERLINE),
("dotted_underline", anstyle::Effects::DOTTED_UNDERLINE),
("dashed_underline", anstyle::Effects::DASHED_UNDERLINE),
("blink", anstyle::Effects::BLINK),
("invert", anstyle::Effects::INVERT),
("hidden", anstyle::Effects::HIDDEN),
("strikethrough", anstyle::Effects::STRIKETHROUGH),
];
let effect = args.value()?.parse_with(|s| {
EFFECTS
.into_iter()
.find(|(name, _)| *name == s)
.map(|(_, effect)| effect)
.ok_or_else(|| {
format!(
"expected one of {}",
EFFECTS
.into_iter()
.map(|(n, _)| n)
.collect::<Vec<_>>()
.join(", ")
)
})
})?;
res.effects = res.effects.insert(effect);
}
_ => return Err(arg.unexpected()),
}
}
Ok(res)
}
}

vendor/anstream/examples/query-stream.rs

@@ -1,20 +0,0 @@
fn main() {
println!("stdout:");
println!(
" choice: {:?}",
anstream::AutoStream::choice(&std::io::stdout())
);
println!(
" choice: {:?}",
anstream::AutoStream::auto(std::io::stdout()).current_choice()
);
println!("stderr:");
println!(
" choice: {:?}",
anstream::AutoStream::choice(&std::io::stderr())
);
println!(
" choice: {:?}",
anstream::AutoStream::auto(std::io::stderr()).current_choice()
);
}

vendor/anstream/src/adapter/mod.rs

@@ -1,15 +0,0 @@
//! Gracefully degrade styled output
mod strip;
mod wincon;
pub use strip::strip_bytes;
pub use strip::strip_str;
pub use strip::StripBytes;
pub use strip::StripBytesIter;
pub use strip::StripStr;
pub use strip::StripStrIter;
pub use strip::StrippedBytes;
pub use strip::StrippedStr;
pub use wincon::WinconBytes;
pub use wincon::WinconBytesIter;
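A short sketch of the byte-oriented stripper re-exported above (the `strip_bytes` and `into_vec` names are taken from the re-exports and the strip bench in this diff):
```rust
fn main() {
    // Byte-oriented counterpart to `strip_str`: feed raw bytes, collect the
    // printable remainder with the escape sequences removed.
    let plain = anstream::adapter::strip_bytes(b"\x1b[1mbold\x1b[0m text").into_vec();
    assert_eq!(plain, b"bold text".to_vec());
}
```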

vendor/anstream/src/adapter/strip.rs

@@ -1,513 +0,0 @@
use anstyle_parse::state::state_change;
use anstyle_parse::state::Action;
use anstyle_parse::state::State;
/// Strip ANSI escapes from a `&str`, returning the printable content
///
/// This can be used to take output from a program that includes escape sequences and write it
/// somewhere that does not easily support them, such as a log file.
///
/// For non-contiguous data, see [`StripStr`].
///
/// # Example
///
/// ```rust
/// use std::io::Write as _;
///
/// let styled_text = "\x1b[32mfoo\x1b[m bar";
/// let plain_str = anstream::adapter::strip_str(&styled_text).to_string();
/// assert_eq!(plain_str, "foo bar");
/// ```
#[inline]
pub fn strip_str(data: &str) -> StrippedStr<'_> {
StrippedStr::new(data)
}
/// See [`strip_str`]
#[derive(Default, Clone, Debug, PartialEq, Eq)]
pub struct StrippedStr<'s> {
bytes: &'s [u8],
state: State,
}
impl<'s> StrippedStr<'s> {
#[inline]
fn new(data: &'s str) -> Self {
Self {
bytes: data.as_bytes(),
state: State::Ground,
}
}
/// Create a [`String`] of the printable content
#[inline]
#[allow(clippy::inherent_to_string_shadow_display)] // Single-allocation implementation
pub fn to_string(&self) -> String {
use std::fmt::Write as _;
let mut stripped = String::with_capacity(self.bytes.len());
let _ = write!(&mut stripped, "{}", self);
stripped
}
}
impl<'s> std::fmt::Display for StrippedStr<'s> {
/// **Note:** this does *not* exhaust the [`Iterator`]
#[inline]
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let iter = Self {
bytes: self.bytes,
state: self.state,
};
for printable in iter {
printable.fmt(f)?;
}
Ok(())
}
}
impl<'s> Iterator for StrippedStr<'s> {
type Item = &'s str;
#[inline]
fn next(&mut self) -> Option<Self::Item> {
next_str(&mut self.bytes, &mut self.state)
}
}
/// Incrementally strip non-contiguous data
#[derive(Default, Clone, Debug, PartialEq, Eq)]
pub struct StripStr {
state: State,
}
impl StripStr {
/// Initial state
pub fn new() -> Self {
Default::default()
}
/// Strip the next segment of data
pub fn strip_next<'s>(&'s mut self, data: &'s str) -> StripStrIter<'s> {
StripStrIter {
bytes: data.as_bytes(),
state: &mut self.state,
}
}
}
/// See [`StripStr`]
#[derive(Debug, PartialEq, Eq)]
pub struct StripStrIter<'s> {
bytes: &'s [u8],
state: &'s mut State,
}
impl<'s> Iterator for StripStrIter<'s> {
type Item = &'s str;
#[inline]
fn next(&mut self) -> Option<Self::Item> {
next_str(&mut self.bytes, self.state)
}
}
#[inline]
fn next_str<'s>(bytes: &mut &'s [u8], state: &mut State) -> Option<&'s str> {
let offset = bytes.iter().copied().position(|b| {
let (next_state, action) = state_change(*state, b);
if next_state != State::Anywhere {
*state = next_state;
}
is_printable_str(action, b)
});
let (_, next) = bytes.split_at(offset.unwrap_or(bytes.len()));
*bytes = next;
*state = State::Ground;
let offset = bytes.iter().copied().position(|b| {
let (_next_state, action) = state_change(State::Ground, b);
!is_printable_str(action, b)
});
let (printable, next) = bytes.split_at(offset.unwrap_or(bytes.len()));
*bytes = next;
if printable.is_empty() {
None
} else {
let printable = unsafe {
from_utf8_unchecked(
printable,
"`bytes` was validated as UTF-8, the parser preserves UTF-8 continuations",
)
};
Some(printable)
}
}
#[inline]
unsafe fn from_utf8_unchecked<'b>(bytes: &'b [u8], safety_justification: &'static str) -> &'b str {
if cfg!(debug_assertions) {
// Catch problems more quickly when testing
std::str::from_utf8(bytes).expect(safety_justification)
} else {
std::str::from_utf8_unchecked(bytes)
}
}
#[inline]
fn is_printable_str(action: Action, byte: u8) -> bool {
// VT320 considered 0x7f to be `Print`able but we expect to be working in UTF-8 systems and not
// ISO Latin-1, making it DEL and non-printable
const DEL: u8 = 0x7f;
(action == Action::Print && byte != DEL)
|| action == Action::BeginUtf8
// since we know the input is valid UTF-8, the only thing we can do with
// continuations is to print them
|| is_utf8_continuation(byte)
|| (action == Action::Execute && byte.is_ascii_whitespace())
}
#[inline]
fn is_utf8_continuation(b: u8) -> bool {
matches!(b, 0x80..=0xbf)
}
/// Strip ANSI escapes from bytes, returning the printable content
///
/// This can be used to take output from a program that includes escape sequences and write it
/// somewhere that does not easily support them, such as a log file.
///
/// # Example
///
/// ```rust
/// use std::io::Write as _;
///
/// let styled_text = "\x1b[32mfoo\x1b[m bar";
/// let plain_str = anstream::adapter::strip_bytes(styled_text.as_bytes()).into_vec();
/// assert_eq!(plain_str.as_slice(), &b"foo bar"[..]);
/// ```
#[inline]
pub fn strip_bytes(data: &[u8]) -> StrippedBytes<'_> {
StrippedBytes::new(data)
}
/// See [`strip_bytes`]
#[derive(Default, Clone, Debug, PartialEq, Eq)]
pub struct StrippedBytes<'s> {
bytes: &'s [u8],
state: State,
utf8parser: Utf8Parser,
}
impl<'s> StrippedBytes<'s> {
/// See [`strip_bytes`]
#[inline]
pub fn new(bytes: &'s [u8]) -> Self {
Self {
bytes,
state: State::Ground,
utf8parser: Default::default(),
}
}
/// Strip the next slice of bytes
///
/// Used when the content is in several non-contiguous slices
///
/// # Panics
///
/// May panic (in debug builds) if the current bytes have not yet been exhausted
#[inline]
pub fn extend(&mut self, bytes: &'s [u8]) {
debug_assert!(
self.is_empty(),
"current bytes must be processed to ensure we end at the right state"
);
self.bytes = bytes;
}
/// Report whether the bytes have been exhausted
#[inline]
pub fn is_empty(&self) -> bool {
self.bytes.is_empty()
}
/// Create a [`Vec`] of the printable content
#[inline]
pub fn into_vec(self) -> Vec<u8> {
let mut stripped = Vec::with_capacity(self.bytes.len());
for printable in self {
stripped.extend(printable);
}
stripped
}
}
impl<'s> Iterator for StrippedBytes<'s> {
type Item = &'s [u8];
#[inline]
fn next(&mut self) -> Option<Self::Item> {
next_bytes(&mut self.bytes, &mut self.state, &mut self.utf8parser)
}
}
/// Incrementally strip non-contiguous data
#[derive(Default, Clone, Debug, PartialEq, Eq)]
pub struct StripBytes {
state: State,
utf8parser: Utf8Parser,
}
impl StripBytes {
/// Initial state
pub fn new() -> Self {
Default::default()
}
/// Strip the next segment of data
pub fn strip_next<'s>(&'s mut self, bytes: &'s [u8]) -> StripBytesIter<'s> {
StripBytesIter {
bytes,
state: &mut self.state,
utf8parser: &mut self.utf8parser,
}
}
}
/// See [`StripBytes`]
#[derive(Debug, PartialEq, Eq)]
pub struct StripBytesIter<'s> {
bytes: &'s [u8],
state: &'s mut State,
utf8parser: &'s mut Utf8Parser,
}
impl<'s> Iterator for StripBytesIter<'s> {
type Item = &'s [u8];
#[inline]
fn next(&mut self) -> Option<Self::Item> {
next_bytes(&mut self.bytes, self.state, self.utf8parser)
}
}
#[inline]
fn next_bytes<'s>(
bytes: &mut &'s [u8],
state: &mut State,
utf8parser: &mut Utf8Parser,
) -> Option<&'s [u8]> {
let offset = bytes.iter().copied().position(|b| {
if *state == State::Utf8 {
true
} else {
let (next_state, action) = state_change(*state, b);
if next_state != State::Anywhere {
*state = next_state;
}
is_printable_bytes(action, b)
}
});
let (_, next) = bytes.split_at(offset.unwrap_or(bytes.len()));
*bytes = next;
let offset = bytes.iter().copied().position(|b| {
if *state == State::Utf8 {
if utf8parser.add(b) {
*state = State::Ground;
}
false
} else {
let (next_state, action) = state_change(State::Ground, b);
if next_state != State::Anywhere {
*state = next_state;
}
if *state == State::Utf8 {
utf8parser.add(b);
false
} else {
!is_printable_bytes(action, b)
}
}
});
let (printable, next) = bytes.split_at(offset.unwrap_or(bytes.len()));
*bytes = next;
if printable.is_empty() {
None
} else {
Some(printable)
}
}
#[derive(Default, Clone, Debug, PartialEq, Eq)]
pub struct Utf8Parser {
utf8_parser: utf8parse::Parser,
}
impl Utf8Parser {
fn add(&mut self, byte: u8) -> bool {
let mut b = false;
let mut receiver = VtUtf8Receiver(&mut b);
self.utf8_parser.advance(&mut receiver, byte);
b
}
}
struct VtUtf8Receiver<'a>(&'a mut bool);
impl<'a> utf8parse::Receiver for VtUtf8Receiver<'a> {
fn codepoint(&mut self, _: char) {
*self.0 = true;
}
fn invalid_sequence(&mut self) {
*self.0 = true;
}
}
#[inline]
fn is_printable_bytes(action: Action, byte: u8) -> bool {
// VT320 considered 0x7f to be `Print`able but we expect to be working in UTF-8 systems and not
// ISO Latin-1, making it DEL and non-printable
const DEL: u8 = 0x7f;
// Continuations aren't included as they may also be control codes, requiring more context
(action == Action::Print && byte != DEL)
|| action == Action::BeginUtf8
|| (action == Action::Execute && byte.is_ascii_whitespace())
}
#[cfg(test)]
mod test {
use super::*;
use proptest::prelude::*;
/// Model based off full parser
fn parser_strip(bytes: &[u8]) -> String {
#[derive(Default)]
struct Strip(String);
impl Strip {
fn with_capacity(capacity: usize) -> Self {
Self(String::with_capacity(capacity))
}
}
impl anstyle_parse::Perform for Strip {
fn print(&mut self, c: char) {
self.0.push(c);
}
fn execute(&mut self, byte: u8) {
if byte.is_ascii_whitespace() {
self.0.push(byte as char);
}
}
}
let mut stripped = Strip::with_capacity(bytes.len());
let mut parser = anstyle_parse::Parser::<anstyle_parse::DefaultCharAccumulator>::new();
for byte in bytes {
parser.advance(&mut stripped, *byte);
}
stripped.0
}
/// Model verifying incremental parsing
fn strip_char(mut s: &str) -> String {
let mut result = String::new();
let mut state = StripStr::new();
while !s.is_empty() {
let mut indices = s.char_indices();
indices.next(); // current
let offset = indices.next().map(|(i, _)| i).unwrap_or_else(|| s.len());
let (current, remainder) = s.split_at(offset);
for printable in state.strip_next(current) {
result.push_str(printable);
}
s = remainder;
}
result
}
/// Model verifying incremental parsing
fn strip_byte(s: &[u8]) -> Vec<u8> {
let mut result = Vec::new();
let mut state = StripBytes::default();
for start in 0..s.len() {
let current = &s[start..=start];
for printable in state.strip_next(current) {
result.extend(printable);
}
}
result
}
#[test]
fn test_strip_bytes_multibyte() {
let bytes = [240, 145, 141, 139];
let expected = parser_strip(&bytes);
let actual = String::from_utf8(strip_bytes(&bytes).into_vec()).unwrap();
assert_eq!(expected, actual);
}
#[test]
fn test_strip_byte_multibyte() {
let bytes = [240, 145, 141, 139];
let expected = parser_strip(&bytes);
let actual = String::from_utf8(strip_byte(&bytes).to_vec()).unwrap();
assert_eq!(expected, actual);
}
#[test]
fn test_strip_str_del() {
let input = std::str::from_utf8(&[0x7f]).unwrap();
let expected = "";
let actual = strip_str(input).to_string();
assert_eq!(expected, actual);
}
#[test]
fn test_strip_byte_del() {
let bytes = [0x7f];
let expected = "";
let actual = String::from_utf8(strip_byte(&bytes).to_vec()).unwrap();
assert_eq!(expected, actual);
}
proptest! {
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn strip_str_no_escapes(s in "\\PC*") {
let expected = parser_strip(s.as_bytes());
let actual = strip_str(&s).to_string();
assert_eq!(expected, actual);
}
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn strip_char_no_escapes(s in "\\PC*") {
let expected = parser_strip(s.as_bytes());
let actual = strip_char(&s);
assert_eq!(expected, actual);
}
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn strip_bytes_no_escapes(s in "\\PC*") {
dbg!(&s);
dbg!(s.as_bytes());
let expected = parser_strip(s.as_bytes());
let actual = String::from_utf8(strip_bytes(s.as_bytes()).into_vec()).unwrap();
assert_eq!(expected, actual);
}
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn strip_byte_no_escapes(s in "\\PC*") {
dbg!(&s);
dbg!(s.as_bytes());
let expected = parser_strip(s.as_bytes());
let actual = String::from_utf8(strip_byte(s.as_bytes()).to_vec()).unwrap();
assert_eq!(expected, actual);
}
}
}
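
For reference, a small usage sketch of the incremental `StripStr` adapter defined above (the `strip_chunks` helper is illustrative, not part of the crate): it feeds non-contiguous chunks and collects the printable output, resuming the parser state across chunk boundaries.

```rust
fn strip_chunks(chunks: &[&str]) -> String {
    let mut state = anstream::adapter::StripStr::new();
    let mut out = String::new();
    for &chunk in chunks {
        // Each call resumes from the state left by the previous chunk, so an
        // escape sequence split across chunks is still removed.
        for printable in state.strip_next(chunk) {
            out.push_str(printable);
        }
    }
    out
}

// e.g. strip_chunks(&["\x1b[3", "2mfoo\x1b[m bar"]) yields "foo bar"
```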

View File

@@ -1,320 +0,0 @@
/// Incrementally convert to wincon calls for non-contiguous data
#[derive(Default, Clone, Debug, PartialEq, Eq)]
pub struct WinconBytes {
parser: anstyle_parse::Parser,
capture: WinconCapture,
}
impl WinconBytes {
/// Initial state
pub fn new() -> Self {
Default::default()
}
/// Extract the next segment of data
pub fn extract_next<'s>(&'s mut self, bytes: &'s [u8]) -> WinconBytesIter<'s> {
self.capture.reset();
self.capture.printable.reserve(bytes.len());
WinconBytesIter {
bytes,
parser: &mut self.parser,
capture: &mut self.capture,
}
}
}
/// See [`WinconBytes`]
#[derive(Debug, PartialEq, Eq)]
pub struct WinconBytesIter<'s> {
bytes: &'s [u8],
parser: &'s mut anstyle_parse::Parser,
capture: &'s mut WinconCapture,
}
impl<'s> Iterator for WinconBytesIter<'s> {
type Item = (anstyle::Style, String);
#[inline]
fn next(&mut self) -> Option<Self::Item> {
next_bytes(&mut self.bytes, self.parser, self.capture)
}
}
#[inline]
fn next_bytes(
bytes: &mut &[u8],
parser: &mut anstyle_parse::Parser,
capture: &mut WinconCapture,
) -> Option<(anstyle::Style, String)> {
capture.reset();
while capture.ready.is_none() {
let byte = if let Some((byte, remainder)) = (*bytes).split_first() {
*bytes = remainder;
*byte
} else {
break;
};
parser.advance(capture, byte);
}
if capture.printable.is_empty() {
return None;
}
let style = capture.ready.unwrap_or(capture.style);
Some((style, std::mem::take(&mut capture.printable)))
}
#[derive(Default, Clone, Debug, PartialEq, Eq)]
struct WinconCapture {
style: anstyle::Style,
printable: String,
ready: Option<anstyle::Style>,
}
impl WinconCapture {
fn reset(&mut self) {
self.ready = None;
}
}
impl anstyle_parse::Perform for WinconCapture {
/// Draw a character to the screen and update states.
fn print(&mut self, c: char) {
self.printable.push(c);
}
/// Execute a C0 or C1 control function.
fn execute(&mut self, byte: u8) {
if byte.is_ascii_whitespace() {
self.printable.push(byte as char);
}
}
fn csi_dispatch(
&mut self,
params: &anstyle_parse::Params,
_intermediates: &[u8],
ignore: bool,
action: u8,
) {
if ignore {
return;
}
if action != b'm' {
return;
}
let mut style = self.style;
// param/value differences are dependent on the escape code
let mut state = State::Normal;
let mut r = None;
let mut g = None;
let mut is_bg = false;
for param in params {
for value in param {
match (state, *value) {
(State::Normal, 0) => {
style = anstyle::Style::default();
break;
}
(State::Normal, 1) => {
style = style.bold();
break;
}
(State::Normal, 4) => {
style = style.underline();
break;
}
(State::Normal, 30..=37) => {
let color = to_ansi_color(value - 30).unwrap();
style = style.fg_color(Some(color.into()));
break;
}
(State::Normal, 38) => {
is_bg = false;
state = State::PrepareCustomColor;
}
(State::Normal, 39) => {
style = style.fg_color(None);
break;
}
(State::Normal, 40..=47) => {
let color = to_ansi_color(value - 40).unwrap();
style = style.bg_color(Some(color.into()));
break;
}
(State::Normal, 48) => {
is_bg = true;
state = State::PrepareCustomColor;
}
(State::Normal, 49) => {
style = style.bg_color(None);
break;
}
(State::Normal, 90..=97) => {
let color = to_ansi_color(value - 90).unwrap().bright(true);
style = style.fg_color(Some(color.into()));
break;
}
(State::Normal, 100..=107) => {
let color = to_ansi_color(value - 100).unwrap().bright(true);
style = style.bg_color(Some(color.into()));
break;
}
(State::PrepareCustomColor, 5) => {
state = State::Ansi256;
}
(State::PrepareCustomColor, 2) => {
state = State::Rgb;
r = None;
g = None;
}
(State::Ansi256, n) => {
let color = anstyle::Ansi256Color(n as u8);
if is_bg {
style = style.bg_color(Some(color.into()));
} else {
style = style.fg_color(Some(color.into()));
}
break;
}
(State::Rgb, b) => match (r, g) {
(None, _) => {
r = Some(b);
}
(Some(_), None) => {
g = Some(b);
}
(Some(r), Some(g)) => {
let color = anstyle::RgbColor(r as u8, g as u8, b as u8);
if is_bg {
style = style.bg_color(Some(color.into()));
} else {
style = style.fg_color(Some(color.into()));
}
break;
}
},
_ => {
break;
}
}
}
}
if style != self.style && !self.printable.is_empty() {
self.ready = Some(self.style);
}
self.style = style;
}
}
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
enum State {
Normal,
PrepareCustomColor,
Ansi256,
Rgb,
}
fn to_ansi_color(digit: u16) -> Option<anstyle::AnsiColor> {
match digit {
0 => Some(anstyle::AnsiColor::Black),
1 => Some(anstyle::AnsiColor::Red),
2 => Some(anstyle::AnsiColor::Green),
3 => Some(anstyle::AnsiColor::Yellow),
4 => Some(anstyle::AnsiColor::Blue),
5 => Some(anstyle::AnsiColor::Magenta),
6 => Some(anstyle::AnsiColor::Cyan),
7 => Some(anstyle::AnsiColor::White),
_ => None,
}
}
#[cfg(test)]
mod test {
use super::*;
use owo_colors::OwoColorize as _;
use proptest::prelude::*;
#[track_caller]
fn verify(input: &str, expected: Vec<(anstyle::Style, &str)>) {
let expected = expected
.into_iter()
.map(|(style, value)| (style, value.to_owned()))
.collect::<Vec<_>>();
let mut state = WinconBytes::new();
let actual = state.extract_next(input.as_bytes()).collect::<Vec<_>>();
assert_eq!(expected, actual, "{input:?}");
}
#[test]
fn start() {
let input = format!("{} world!", "Hello".green().on_red());
let expected = vec![
(
anstyle::AnsiColor::Green.on(anstyle::AnsiColor::Red),
"Hello",
),
(anstyle::Style::default(), " world!"),
];
verify(&input, expected);
}
#[test]
fn middle() {
let input = format!("Hello {}!", "world".green().on_red());
let expected = vec![
(anstyle::Style::default(), "Hello "),
(
anstyle::AnsiColor::Green.on(anstyle::AnsiColor::Red),
"world",
),
(anstyle::Style::default(), "!"),
];
verify(&input, expected);
}
#[test]
fn end() {
let input = format!("Hello {}", "world!".green().on_red());
let expected = vec![
(anstyle::Style::default(), "Hello "),
(
anstyle::AnsiColor::Green.on(anstyle::AnsiColor::Red),
"world!",
),
];
verify(&input, expected);
}
#[test]
fn ansi256_colors() {
// termcolor only supports "brights" via these
let input = format!(
"Hello {}!",
"world".color(owo_colors::XtermColors::UserBrightYellow)
);
let expected = vec![
(anstyle::Style::default(), "Hello "),
(anstyle::Ansi256Color(11).on_default(), "world"),
(anstyle::Style::default(), "!"),
];
verify(&input, expected);
}
proptest! {
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn wincon_no_escapes(s in "\\PC*") {
let expected = if s.is_empty() {
vec![]
} else {
vec![(anstyle::Style::default(), s.clone())]
};
let mut state = WinconBytes::new();
let actual = state.extract_next(s.as_bytes()).collect::<Vec<_>>();
assert_eq!(expected, actual);
}
}
}
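
A short sketch of driving the `WinconBytes` adapter above directly (the `split_runs` helper is hypothetical): it turns ANSI-styled bytes into `(Style, String)` runs that a console-API backend could render.

```rust
fn split_runs(input: &[u8]) -> Vec<(anstyle::Style, String)> {
    let mut state = anstream::adapter::WinconBytes::new();
    // Each item is a contiguous printable run paired with the style that was
    // active when it was produced; the escape sequences themselves are consumed.
    state.extract_next(input).collect()
}

// e.g. b"\x1b[32mHello\x1b[0m world" splits into a green "Hello" run and a
// default-styled " world" run, as the tests above demonstrate.
```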

View File

@@ -1,263 +0,0 @@
use crate::stream::AsLockedWrite;
use crate::stream::RawStream;
#[cfg(feature = "auto")]
use crate::ColorChoice;
use crate::StripStream;
#[cfg(all(windows, feature = "wincon"))]
use crate::WinconStream;
/// [`std::io::Write`] that adapts ANSI escape codes to the underlying `Write`'s capabilities
#[derive(Debug)]
pub struct AutoStream<S: RawStream> {
inner: StreamInner<S>,
}
#[derive(Debug)]
enum StreamInner<S: RawStream> {
PassThrough(S),
Strip(StripStream<S>),
#[cfg(all(windows, feature = "wincon"))]
Wincon(WinconStream<S>),
}
impl<S> AutoStream<S>
where
S: RawStream,
{
/// Runtime control over styling behavior
#[cfg(feature = "auto")]
#[inline]
pub fn new(raw: S, choice: ColorChoice) -> Self {
match choice {
ColorChoice::Auto => Self::auto(raw),
ColorChoice::AlwaysAnsi => Self::always_ansi(raw),
ColorChoice::Always => Self::always(raw),
ColorChoice::Never => Self::never(raw),
}
}
/// Auto-adapt for the stream's capabilities
#[cfg(feature = "auto")]
#[inline]
pub fn auto(raw: S) -> Self {
let choice = Self::choice(&raw);
debug_assert_ne!(choice, ColorChoice::Auto);
Self::new(raw, choice)
}
/// Report the desired choice for the given stream
#[cfg(feature = "auto")]
pub fn choice(raw: &S) -> ColorChoice {
choice(raw)
}
/// Force ANSI escape codes to be passed through as-is, no matter what the inner `Write`
/// supports.
#[inline]
pub fn always_ansi(raw: S) -> Self {
#[cfg(feature = "auto")]
{
if raw.is_terminal() {
let _ = anstyle_query::windows::enable_ansi_colors();
}
}
Self::always_ansi_(raw)
}
#[inline]
fn always_ansi_(raw: S) -> Self {
let inner = StreamInner::PassThrough(raw);
AutoStream { inner }
}
/// Force color, no matter what the inner `Write` supports.
#[inline]
pub fn always(raw: S) -> Self {
if cfg!(windows) {
#[cfg(feature = "auto")]
let use_wincon = raw.is_terminal()
&& !anstyle_query::windows::enable_ansi_colors().unwrap_or(true)
&& !anstyle_query::term_supports_ansi_color();
#[cfg(not(feature = "auto"))]
let use_wincon = true;
if use_wincon {
Self::wincon(raw).unwrap_or_else(|raw| Self::always_ansi_(raw))
} else {
Self::always_ansi_(raw)
}
} else {
Self::always_ansi(raw)
}
}
/// Only pass printable data to the inner `Write`.
#[inline]
pub fn never(raw: S) -> Self {
let inner = StreamInner::Strip(StripStream::new(raw));
AutoStream { inner }
}
#[inline]
fn wincon(raw: S) -> Result<Self, S> {
#[cfg(all(windows, feature = "wincon"))]
{
Ok(Self {
inner: StreamInner::Wincon(WinconStream::new(raw)),
})
}
#[cfg(not(all(windows, feature = "wincon")))]
{
Err(raw)
}
}
/// Get the wrapped [`RawStream`]
#[inline]
pub fn into_inner(self) -> S {
match self.inner {
StreamInner::PassThrough(w) => w,
StreamInner::Strip(w) => w.into_inner(),
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(w) => w.into_inner(),
}
}
#[inline]
pub fn is_terminal(&self) -> bool {
match &self.inner {
StreamInner::PassThrough(w) => w.is_terminal(),
StreamInner::Strip(w) => w.is_terminal(),
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(_) => true, // it's only ever a terminal
}
}
/// Prefer [`AutoStream::choice`]
///
/// This doesn't report what is requested but what is currently active.
#[inline]
#[cfg(feature = "auto")]
pub fn current_choice(&self) -> ColorChoice {
match &self.inner {
StreamInner::PassThrough(_) => ColorChoice::AlwaysAnsi,
StreamInner::Strip(_) => ColorChoice::Never,
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(_) => ColorChoice::Always,
}
}
}
#[cfg(feature = "auto")]
fn choice(raw: &dyn RawStream) -> ColorChoice {
let choice = ColorChoice::global();
match choice {
ColorChoice::Auto => {
let clicolor = anstyle_query::clicolor();
let clicolor_enabled = clicolor.unwrap_or(false);
let clicolor_disabled = !clicolor.unwrap_or(true);
if raw.is_terminal()
&& !anstyle_query::no_color()
&& !clicolor_disabled
&& (anstyle_query::term_supports_color()
|| clicolor_enabled
|| anstyle_query::is_ci())
|| anstyle_query::clicolor_force()
{
ColorChoice::Always
} else {
ColorChoice::Never
}
}
ColorChoice::AlwaysAnsi | ColorChoice::Always | ColorChoice::Never => choice,
}
}
impl AutoStream<std::io::Stdout> {
/// Get exclusive access to the `AutoStream`
///
/// Why?
/// - Faster performance when writing in a loop
/// - Avoid other threads interleaving output with the current thread
#[inline]
pub fn lock(self) -> AutoStream<std::io::StdoutLock<'static>> {
let inner = match self.inner {
StreamInner::PassThrough(w) => StreamInner::PassThrough(w.lock()),
StreamInner::Strip(w) => StreamInner::Strip(w.lock()),
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(w) => StreamInner::Wincon(w.lock()),
};
AutoStream { inner }
}
}
impl AutoStream<std::io::Stderr> {
/// Get exclusive access to the `AutoStream`
///
/// Why?
/// - Faster performance when writing in a loop
/// - Avoid other threads interleaving output with the current thread
#[inline]
pub fn lock(self) -> AutoStream<std::io::StderrLock<'static>> {
let inner = match self.inner {
StreamInner::PassThrough(w) => StreamInner::PassThrough(w.lock()),
StreamInner::Strip(w) => StreamInner::Strip(w.lock()),
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(w) => StreamInner::Wincon(w.lock()),
};
AutoStream { inner }
}
}
impl<S> std::io::Write for AutoStream<S>
where
S: RawStream + AsLockedWrite,
{
// Must forward all calls to ensure locking happens appropriately
#[inline]
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
match &mut self.inner {
StreamInner::PassThrough(w) => w.as_locked_write().write(buf),
StreamInner::Strip(w) => w.write(buf),
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(w) => w.write(buf),
}
}
#[inline]
fn write_vectored(&mut self, bufs: &[std::io::IoSlice<'_>]) -> std::io::Result<usize> {
match &mut self.inner {
StreamInner::PassThrough(w) => w.as_locked_write().write_vectored(bufs),
StreamInner::Strip(w) => w.write_vectored(bufs),
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(w) => w.write_vectored(bufs),
}
}
// is_write_vectored: nightly only
#[inline]
fn flush(&mut self) -> std::io::Result<()> {
match &mut self.inner {
StreamInner::PassThrough(w) => w.as_locked_write().flush(),
StreamInner::Strip(w) => w.flush(),
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(w) => w.flush(),
}
}
#[inline]
fn write_all(&mut self, buf: &[u8]) -> std::io::Result<()> {
match &mut self.inner {
StreamInner::PassThrough(w) => w.as_locked_write().write_all(buf),
StreamInner::Strip(w) => w.write_all(buf),
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(w) => w.write_all(buf),
}
}
// write_all_vectored: nightly only
#[inline]
fn write_fmt(&mut self, args: std::fmt::Arguments<'_>) -> std::io::Result<()> {
match &mut self.inner {
StreamInner::PassThrough(w) => w.as_locked_write().write_fmt(args),
StreamInner::Strip(w) => w.write_fmt(args),
#[cfg(all(windows, feature = "wincon"))]
StreamInner::Wincon(w) => w.write_fmt(args),
}
}
}
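
A hedged usage sketch of the `AutoStream` wrapper above (assumes the `auto` feature; the `emit_styled` name is made up): write ANSI-styled text once and let the stream pass it through, strip it, or, on Windows with the `wincon` feature, translate it, depending on the detected capabilities.

```rust
#[cfg(feature = "auto")]
fn emit_styled() -> std::io::Result<()> {
    use std::io::Write as _;

    // `auto` queries the terminal; when stdout is piped to a file, the
    // escape codes are stripped instead of passed through.
    let mut out = anstream::AutoStream::auto(std::io::stdout()).lock();
    writeln!(out, "\x1b[1mbold where supported\x1b[0m")?;
    out.flush()
}
```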

View File

@@ -1,68 +0,0 @@
#![allow(deprecated)]
/// In-memory [`RawStream`][crate::stream::RawStream]
#[derive(Clone, Default, Debug, PartialEq, Eq)]
#[deprecated(since = "0.6.2", note = "Use Vec")]
#[doc(hidden)]
pub struct Buffer(Vec<u8>);
impl Buffer {
#[inline]
pub fn new() -> Self {
Default::default()
}
#[inline]
pub fn with_capacity(capacity: usize) -> Self {
Self(Vec::with_capacity(capacity))
}
#[inline]
pub fn as_bytes(&self) -> &[u8] {
&self.0
}
}
impl AsRef<[u8]> for Buffer {
#[inline]
fn as_ref(&self) -> &[u8] {
self.as_bytes()
}
}
impl std::io::Write for Buffer {
#[inline]
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
self.0.extend(buf);
Ok(buf.len())
}
#[inline]
fn flush(&mut self) -> std::io::Result<()> {
Ok(())
}
}
#[cfg(all(windows, feature = "wincon"))]
impl anstyle_wincon::WinconStream for Buffer {
fn write_colored(
&mut self,
fg: Option<anstyle::AnsiColor>,
bg: Option<anstyle::AnsiColor>,
data: &[u8],
) -> std::io::Result<usize> {
self.0.write_colored(fg, bg, data)
}
}
#[cfg(all(windows, feature = "wincon"))]
impl anstyle_wincon::WinconStream for &'_ mut Buffer {
fn write_colored(
&mut self,
fg: Option<anstyle::AnsiColor>,
bg: Option<anstyle::AnsiColor>,
data: &[u8],
) -> std::io::Result<usize> {
(**self).write_colored(fg, bg, data)
}
}

View File

@@ -1,54 +0,0 @@
/// A shim which allows a [`std::io::Write`] to be implemented in terms of a [`std::fmt::Write`]
///
/// This saves off I/O errors instead of discarding them.
pub(crate) struct Adapter<W>
where
W: FnMut(&[u8]) -> std::io::Result<()>,
{
writer: W,
error: std::io::Result<()>,
}
impl<W> Adapter<W>
where
W: FnMut(&[u8]) -> std::io::Result<()>,
{
pub(crate) fn new(writer: W) -> Self {
Adapter {
writer,
error: Ok(()),
}
}
pub(crate) fn write_fmt(mut self, fmt: std::fmt::Arguments<'_>) -> std::io::Result<()> {
match std::fmt::write(&mut self, fmt) {
Ok(()) => Ok(()),
Err(..) => {
// check if the error came from the underlying `Write` or not
if self.error.is_err() {
self.error
} else {
Err(std::io::Error::new(
std::io::ErrorKind::Other,
"formatter error",
))
}
}
}
}
}
impl<W> std::fmt::Write for Adapter<W>
where
W: FnMut(&[u8]) -> std::io::Result<()>,
{
fn write_str(&mut self, s: &str) -> std::fmt::Result {
match (self.writer)(s.as_bytes()) {
Ok(()) => Ok(()),
Err(e) => {
self.error = Err(e);
Err(std::fmt::Error)
}
}
}
}
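
An internal usage sketch of the `Adapter` shim above (the `forward` helper is made up; `Adapter` is `pub(crate)`, so this only applies inside the crate): it mirrors how `StripStream` and `WinconStream` implement `write_fmt`.

```rust
fn forward(
    sink: &mut dyn std::io::Write,
    args: std::fmt::Arguments<'_>,
) -> std::io::Result<()> {
    // Each formatted fragment reaches the closure as bytes; if the sink
    // fails, the original `std::io::Error` is reported instead of the
    // information-free `std::fmt::Error`.
    Adapter::new(|buf: &[u8]| sink.write_all(buf)).write_fmt(args)
}
```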

View File

@@ -1,79 +0,0 @@
//! **Auto-adapting [`stdout`] / [`stderr`] streams**
//!
//! *A portmanteau of "ansi stream"*
//!
//! [`AutoStream`] always accepts [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code),
//! adapting to the user's terminal's capabilities.
//!
//! Benefits
//! - Allows the caller to not be concerned with the terminal's capabilities
//! - Semver safe way of passing styled text between crates as ANSI escape codes offer more
//! compatibility than most crate APIs.
//!
//! Available styling crates:
//! - [anstyle](https://docs.rs/anstyle) for minimal runtime styling, designed to go in public APIs
//! (once it hits 1.0)
//! - [owo-colors](https://docs.rs/owo-colors) for feature-rich runtime styling
//! - [color-print](https://docs.rs/color-print) for feature-rich compile-time styling
//!
//! # Example
//!
//! ```
//! # #[cfg(feature = "auto")] {
//! use anstream::println;
//! use owo_colors::OwoColorize as _;
//!
//! // Foreground colors
//! println!("My number is {:#x}!", 10.green());
//! // Background colors
//! println!("My number is not {}!", 4.on_red());
//! # }
//! ```
//!
//! And this will correctly handle piping to a file, etc
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
pub mod adapter;
pub mod stream;
mod buffer;
#[macro_use]
mod macros;
mod auto;
mod fmt;
mod strip;
#[cfg(all(windows, feature = "wincon"))]
mod wincon;
pub use auto::AutoStream;
pub use strip::StripStream;
#[cfg(all(windows, feature = "wincon"))]
pub use wincon::WinconStream;
#[allow(deprecated)]
pub use buffer::Buffer;
/// Create an ANSI escape code compatible stdout
///
/// **Note:** Call [`AutoStream::lock`] in loops to avoid the performance hit of acquiring/releasing
/// from the implicit locking in each [`std::io::Write`] call
#[cfg(feature = "auto")]
pub fn stdout() -> AutoStream<std::io::Stdout> {
let stdout = std::io::stdout();
AutoStream::auto(stdout)
}
/// Create an ANSI escape code compatible stderr
///
/// **Note:** Call [`AutoStream::lock`] in loops to avoid the performance hit of acquiring/releasing
/// from the implicit locking in each [`std::io::Write`] call
#[cfg(feature = "auto")]
pub fn stderr() -> AutoStream<std::io::Stderr> {
let stderr = std::io::stderr();
AutoStream::auto(stderr)
}
/// Selection for overriding color output
#[cfg(feature = "auto")]
pub use colorchoice::ColorChoice;

View File

@@ -1,389 +0,0 @@
/// Prints to [`stdout`][crate::stdout].
///
/// Equivalent to the [`println!`] macro except that a newline is not printed at
/// the end of the message.
///
/// Note that stdout is frequently line-buffered by default so it may be
/// necessary to use [`std::io::Write::flush()`] to ensure the output is emitted
/// immediately.
///
/// **NOTE:** The `print!` macro will lock the standard output on each call. If you call
/// `print!` within a hot loop, this behavior may be the bottleneck of the loop.
/// To avoid this, lock stdout with [`AutoStream::lock`][crate::AutoStream::lock]:
/// ```
/// # #[cfg(feature = "auto")] {
/// use std::io::Write as _;
///
/// let mut lock = anstream::stdout().lock();
/// write!(lock, "hello world").unwrap();
/// # }
/// ```
///
/// Use `print!` only for the primary output of your program. Use
/// [`eprint!`] instead to print error and progress messages.
///
/// **NOTE:** Not all `print!` calls will be captured in tests like [`std::print!`]
/// - Capturing will automatically be activated in test binaries
/// - Otherwise, only when the `test` feature is enabled
///
/// # Panics
///
/// Panics if writing to `stdout` fails for any reason **except** broken pipe.
///
/// Writing to non-blocking stdout can cause an error, which will lead
/// this macro to panic.
///
/// # Examples
///
/// ```
/// # #[cfg(feature = "auto")] {
/// use std::io::Write as _;
/// use anstream::print;
/// use anstream::stdout;
///
/// print!("this ");
/// print!("will ");
/// print!("be ");
/// print!("on ");
/// print!("the ");
/// print!("same ");
/// print!("line ");
///
/// stdout().flush().unwrap();
///
/// print!("this string has a newline, why not choose println! instead?\n");
///
/// stdout().flush().unwrap();
/// # }
/// ```
#[cfg(feature = "auto")]
#[macro_export]
macro_rules! print {
($($arg:tt)*) => {{
if cfg!(any(feature = "test", test)) {
use std::io::Write as _;
let stdio = std::io::stdout();
let choice = $crate::AutoStream::choice(&stdio);
let buffer = Vec::new();
let mut stream = $crate::AutoStream::new(buffer, choice);
// Ignore errors rather than panic
let _ = ::std::write!(&mut stream, $($arg)*);
let buffer = stream.into_inner();
// Should be UTF-8 but not wanting to panic
let buffer = String::from_utf8_lossy(&buffer);
::std::print!("{}", buffer)
} else {
use std::io::Write as _;
let mut stream = $crate::stdout();
match ::std::write!(&mut stream, $($arg)*) {
Err(e) if e.kind() != ::std::io::ErrorKind::BrokenPipe => {
::std::panic!("failed printing to stdout: {e}");
}
Err(_) | Ok(_) => {}
}
}
}};
}
/// Prints to [`stdout`][crate::stdout], with a newline.
///
/// On all platforms, the newline is the LINE FEED character (`\n`/`U+000A`) alone
/// (no additional CARRIAGE RETURN (`\r`/`U+000D`)).
///
/// This macro uses the same syntax as [`format!`], but writes to the standard output instead.
/// See [`std::fmt`] for more information.
///
/// **NOTE:** The `println!` macro will lock the standard output on each call. If you call
/// `println!` within a hot loop, this behavior may be the bottleneck of the loop.
/// To avoid this, lock stdout with [`AutoStream::lock`][crate::AutoStream::lock]:
/// ```
/// # #[cfg(feature = "auto")] {
/// use std::io::Write as _;
///
/// let mut lock = anstream::stdout().lock();
/// writeln!(lock, "hello world").unwrap();
/// # }
/// ```
///
/// Use `println!` only for the primary output of your program. Use
/// [`eprintln!`] instead to print error and progress messages.
///
/// **NOTE:** Not all `println!` calls will be captured in tests like [`std::println!`]
/// - Capturing will automatically be activated in test binaries
/// - Otherwise, only when the `test` feature is enabled
///
/// # Panics
///
/// Panics if writing to `stdout` fails for any reason **except** broken pipe.
///
/// Writing to non-blocking stdout can cause an error, which will lead
/// this macro to panic.
///
/// # Examples
///
/// ```
/// # #[cfg(feature = "auto")] {
/// use anstream::println;
///
/// println!(); // prints just a newline
/// println!("hello there!");
/// println!("format {} arguments", "some");
/// let local_variable = "some";
/// println!("format {local_variable} arguments");
/// # }
/// ```
#[cfg(feature = "auto")]
#[macro_export]
macro_rules! println {
() => {
$crate::print!("\n")
};
($($arg:tt)*) => {{
if cfg!(any(feature = "test", test)) {
use std::io::Write as _;
let stdio = std::io::stdout();
let choice = $crate::AutoStream::choice(&stdio);
let buffer = Vec::new();
let mut stream = $crate::AutoStream::new(buffer, choice);
// Ignore errors rather than panic
let _ = ::std::write!(&mut stream, $($arg)*);
let buffer = stream.into_inner();
// Should be UTF-8 but not wanting to panic
let buffer = String::from_utf8_lossy(&buffer);
::std::println!("{}", buffer)
} else {
use std::io::Write as _;
let mut stream = $crate::stdout();
match ::std::writeln!(&mut stream, $($arg)*) {
Err(e) if e.kind() != ::std::io::ErrorKind::BrokenPipe => {
::std::panic!("failed printing to stdout: {e}");
}
Err(_) | Ok(_) => {}
}
}
}};
}
/// Prints to [`stderr`][crate::stderr].
///
/// Equivalent to the [`print!`] macro, except that output goes to
/// `stderr` instead of `stdout`. See [`print!`] for
/// example usage.
///
/// Use `eprint!` only for error and progress messages. Use `print!`
/// instead for the primary output of your program.
///
/// **NOTE:** Not all `eprint!` calls will be captured in tests like [`std::eprint!`]
/// - Capturing will automatically be activated in test binaries
/// - Otherwise, only when the `test` feature is enabled
///
/// # Panics
///
/// Panics if writing to `stderr` fails for any reason **except** broken pipe.
///
/// Writing to non-blocking stderr can cause an error, which will lead
/// this macro to panic.
///
/// # Examples
///
/// ```
/// # #[cfg(feature = "auto")] {
/// use anstream::eprint;
///
/// eprint!("Error: Could not complete task");
/// # }
/// ```
#[cfg(feature = "auto")]
#[macro_export]
macro_rules! eprint {
($($arg:tt)*) => {{
if cfg!(any(feature = "test", test)) {
use std::io::Write as _;
let stdio = std::io::stderr();
let choice = $crate::AutoStream::choice(&stdio);
let buffer = Vec::new();
let mut stream = $crate::AutoStream::new(buffer, choice);
// Ignore errors rather than panic
let _ = ::std::write!(&mut stream, $($arg)*);
let buffer = stream.into_inner();
// Should be UTF-8 but not wanting to panic
let buffer = String::from_utf8_lossy(&buffer);
::std::eprint!("{}", buffer)
} else {
use std::io::Write as _;
let mut stream = $crate::stderr();
match ::std::write!(&mut stream, $($arg)*) {
Err(e) if e.kind() != ::std::io::ErrorKind::BrokenPipe => {
::std::panic!("failed printing to stdout: {e}");
}
Err(_) | Ok(_) => {}
}
}
}};
}
/// Prints to [`stderr`][crate::stderr], with a newline.
///
/// Equivalent to the [`println!`] macro, except that output goes to
/// `stderr` instead of `stdout`. See [`println!`] for
/// example usage.
///
/// Use `eprintln!` only for error and progress messages. Use `println!`
/// instead for the primary output of your program.
///
/// **NOTE:** Not all `eprintln!` calls will be captured in tests like [`std::eprintln!`]
/// - Capturing will automatically be activated in test binaries
/// - Otherwise, only when the `test` feature is enabled
///
/// # Panics
///
/// Panics if writing to `stderr` fails for any reason **except** broken pipe.
///
/// Writing to non-blocking stderr can cause an error, which will lead
/// this macro to panic.
///
/// # Examples
///
/// ```
/// # #[cfg(feature = "auto")] {
/// use anstream::eprintln;
///
/// eprintln!("Error: Could not complete task");
/// # }
/// ```
#[cfg(feature = "auto")]
#[macro_export]
macro_rules! eprintln {
() => {
$crate::eprint!("\n")
};
($($arg:tt)*) => {{
if cfg!(any(feature = "test", test)) {
use std::io::Write as _;
let stdio = std::io::stderr();
let choice = $crate::AutoStream::choice(&stdio);
let buffer = Vec::new();
let mut stream = $crate::AutoStream::new(buffer, choice);
// Ignore errors rather than panic
let _ = ::std::write!(&mut stream, $($arg)*);
let buffer = stream.into_inner();
// Should be UTF-8 but not wanting to panic
let buffer = String::from_utf8_lossy(&buffer);
::std::eprintln!("{}", buffer)
} else {
use std::io::Write as _;
let mut stream = $crate::stderr();
match ::std::writeln!(&mut stream, $($arg)*) {
Err(e) if e.kind() != ::std::io::ErrorKind::BrokenPipe => {
::std::panic!("failed printing to stdout: {e}");
}
Err(_) | Ok(_) => {}
}
}
}};
}
/// Panics the current thread.
///
/// This allows a program to terminate immediately and provide feedback
/// to the caller of the program.
///
/// This macro is the perfect way to assert conditions in example code and in
/// tests. `panic!` is closely tied with the `unwrap` method of both
/// [`Option`][ounwrap] and [`Result`][runwrap] enums. Both implementations call
/// `panic!` when they are set to [`None`] or [`Err`] variants.
///
/// When using `panic!()` you can specify a string payload, that is built using
/// the [`format!`] syntax. That payload is used when injecting the panic into
/// the calling Rust thread, causing the thread to panic entirely.
///
/// The behavior of the default `std` hook, i.e. the code that runs directly
/// after the panic is invoked, is to print the message payload to
/// `stderr` along with the file/line/column information of the `panic!()`
/// call. You can override the panic hook using [`std::panic::set_hook()`].
/// Inside the hook a panic can be accessed as a `&dyn Any + Send`,
/// which contains either a `&str` or `String` for regular `panic!()` invocations.
/// To panic with a value of another type, [`panic_any`] can be used.
///
/// See also the macro [`compile_error!`], for raising errors during compilation.
///
/// # When to use `panic!` vs `Result`
///
/// The Rust language provides two complementary systems for constructing /
/// representing, reporting, propagating, reacting to, and discarding errors. These
/// responsibilities are collectively known as "error handling." `panic!` and
/// `Result` are similar in that they are each the primary interface of their
/// respective error handling systems; however, the meaning these interfaces attach
/// to their errors and the responsibilities they fulfill within their respective
/// error handling systems differ.
///
/// The `panic!` macro is used to construct errors that represent a bug that has
/// been detected in your program. With `panic!` you provide a message that
/// describes the bug and the language then constructs an error with that message,
/// reports it, and propagates it for you.
///
/// `Result` on the other hand is used to wrap other types that represent either
/// the successful result of some computation, `Ok(T)`, or error types that
/// represent an anticipated runtime failure mode of that computation, `Err(E)`.
/// `Result` is used alongside user defined types which represent the various
/// anticipated runtime failure modes that the associated computation could
/// encounter. `Result` must be propagated manually, often with the help of the
/// `?` operator and `Try` trait, and they must be reported manually, often with
/// the help of the `Error` trait.
///
/// For more detailed information about error handling check out the [book] or the
/// [`std::result`] module docs.
///
/// [ounwrap]: Option::unwrap
/// [runwrap]: Result::unwrap
/// [`std::panic::set_hook()`]: ../std/panic/fn.set_hook.html
/// [`panic_any`]: ../std/panic/fn.panic_any.html
/// [`Box`]: ../std/boxed/struct.Box.html
/// [`Any`]: crate::any::Any
/// [`format!`]: ../std/macro.format.html
/// [book]: ../book/ch09-00-error-handling.html
/// [`std::result`]: ../std/result/index.html
///
/// # Current implementation
///
/// If the main thread panics it will terminate all your threads and end your
/// program with code `101`.
///
/// # Examples
///
/// ```should_panic
/// # #![allow(unreachable_code)]
/// use anstream::panic;
/// panic!();
/// panic!("this is a terrible mistake!");
/// panic!("this is a {} {message}", "fancy", message = "message");
/// ```
#[cfg(feature = "auto")]
#[macro_export]
macro_rules! panic {
() => {
::std::panic!()
};
($($arg:tt)*) => {{
use std::io::Write as _;
let panic_stream = std::io::stderr();
let choice = $crate::AutoStream::choice(&panic_stream);
let buffer = Vec::new();
let mut stream = $crate::AutoStream::new(buffer, choice);
// Ignore errors rather than panic
let _ = ::std::write!(&mut stream, $($arg)*);
let buffer = stream.into_inner();
// Should be UTF-8 but not wanting to panic
let buffer = String::from_utf8_lossy(&buffer).into_owned();
::std::panic!("{}", buffer)
}};
}

View File

@@ -1,261 +0,0 @@
//! Higher-level traits to describe writeable streams
/// Required functionality for underlying [`std::io::Write`] for adaptation
#[cfg(not(all(windows, feature = "wincon")))]
pub trait RawStream: std::io::Write + IsTerminal + private::Sealed {}
/// Required functionality for underlying [`std::io::Write`] for adaptation
#[cfg(all(windows, feature = "wincon"))]
pub trait RawStream:
std::io::Write + IsTerminal + anstyle_wincon::WinconStream + private::Sealed
{
}
impl RawStream for std::io::Stdout {}
impl RawStream for std::io::StdoutLock<'_> {}
impl RawStream for &'_ mut std::io::StdoutLock<'_> {}
impl RawStream for std::io::Stderr {}
impl RawStream for std::io::StderrLock<'_> {}
impl RawStream for &'_ mut std::io::StderrLock<'_> {}
impl RawStream for Box<dyn std::io::Write> {}
impl RawStream for &'_ mut Box<dyn std::io::Write> {}
impl RawStream for Vec<u8> {}
impl RawStream for &'_ mut Vec<u8> {}
impl RawStream for std::fs::File {}
impl RawStream for &'_ mut std::fs::File {}
#[allow(deprecated)]
impl RawStream for crate::Buffer {}
#[allow(deprecated)]
impl RawStream for &'_ mut crate::Buffer {}
pub trait IsTerminal: private::Sealed {
fn is_terminal(&self) -> bool;
}
impl IsTerminal for std::io::Stdout {
#[inline]
fn is_terminal(&self) -> bool {
std::io::IsTerminal::is_terminal(self)
}
}
impl IsTerminal for std::io::StdoutLock<'_> {
#[inline]
fn is_terminal(&self) -> bool {
std::io::IsTerminal::is_terminal(self)
}
}
impl IsTerminal for &'_ mut std::io::StdoutLock<'_> {
#[inline]
fn is_terminal(&self) -> bool {
(**self).is_terminal()
}
}
impl IsTerminal for std::io::Stderr {
#[inline]
fn is_terminal(&self) -> bool {
std::io::IsTerminal::is_terminal(self)
}
}
impl IsTerminal for std::io::StderrLock<'_> {
#[inline]
fn is_terminal(&self) -> bool {
std::io::IsTerminal::is_terminal(self)
}
}
impl IsTerminal for &'_ mut std::io::StderrLock<'_> {
#[inline]
fn is_terminal(&self) -> bool {
(**self).is_terminal()
}
}
impl IsTerminal for Box<dyn std::io::Write> {
#[inline]
fn is_terminal(&self) -> bool {
false
}
}
impl IsTerminal for &'_ mut Box<dyn std::io::Write> {
#[inline]
fn is_terminal(&self) -> bool {
false
}
}
impl IsTerminal for Vec<u8> {
#[inline]
fn is_terminal(&self) -> bool {
false
}
}
impl IsTerminal for &'_ mut Vec<u8> {
#[inline]
fn is_terminal(&self) -> bool {
false
}
}
impl IsTerminal for std::fs::File {
#[inline]
fn is_terminal(&self) -> bool {
std::io::IsTerminal::is_terminal(self)
}
}
impl IsTerminal for &'_ mut std::fs::File {
#[inline]
fn is_terminal(&self) -> bool {
(**self).is_terminal()
}
}
#[allow(deprecated)]
impl IsTerminal for crate::Buffer {
#[inline]
fn is_terminal(&self) -> bool {
false
}
}
#[allow(deprecated)]
impl IsTerminal for &'_ mut crate::Buffer {
#[inline]
fn is_terminal(&self) -> bool {
(**self).is_terminal()
}
}
pub trait AsLockedWrite: private::Sealed {
type Write<'w>: RawStream + 'w
where
Self: 'w;
fn as_locked_write(&mut self) -> Self::Write<'_>;
}
impl AsLockedWrite for std::io::Stdout {
type Write<'w> = std::io::StdoutLock<'w>;
#[inline]
fn as_locked_write(&mut self) -> Self::Write<'_> {
self.lock()
}
}
impl AsLockedWrite for std::io::StdoutLock<'static> {
type Write<'w> = &'w mut Self;
#[inline]
fn as_locked_write(&mut self) -> Self::Write<'_> {
self
}
}
impl AsLockedWrite for std::io::Stderr {
type Write<'w> = std::io::StderrLock<'w>;
#[inline]
fn as_locked_write(&mut self) -> Self::Write<'_> {
self.lock()
}
}
impl AsLockedWrite for std::io::StderrLock<'static> {
type Write<'w> = &'w mut Self;
#[inline]
fn as_locked_write(&mut self) -> Self::Write<'_> {
self
}
}
impl AsLockedWrite for Box<dyn std::io::Write> {
type Write<'w> = &'w mut Self;
#[inline]
fn as_locked_write(&mut self) -> Self::Write<'_> {
self
}
}
impl AsLockedWrite for Vec<u8> {
type Write<'w> = &'w mut Self;
#[inline]
fn as_locked_write(&mut self) -> Self::Write<'_> {
self
}
}
impl AsLockedWrite for std::fs::File {
type Write<'w> = &'w mut Self;
#[inline]
fn as_locked_write(&mut self) -> Self::Write<'_> {
self
}
}
#[allow(deprecated)]
impl AsLockedWrite for crate::Buffer {
type Write<'w> = &'w mut Self;
#[inline]
fn as_locked_write(&mut self) -> Self::Write<'_> {
self
}
}
mod private {
pub trait Sealed {}
impl Sealed for std::io::Stdout {}
impl Sealed for std::io::StdoutLock<'_> {}
impl Sealed for &'_ mut std::io::StdoutLock<'_> {}
impl Sealed for std::io::Stderr {}
impl Sealed for std::io::StderrLock<'_> {}
impl Sealed for &'_ mut std::io::StderrLock<'_> {}
impl Sealed for Box<dyn std::io::Write> {}
impl Sealed for &'_ mut Box<dyn std::io::Write> {}
impl Sealed for Vec<u8> {}
impl Sealed for &'_ mut Vec<u8> {}
impl Sealed for std::fs::File {}
impl Sealed for &'_ mut std::fs::File {}
#[allow(deprecated)]
impl Sealed for crate::Buffer {}
#[allow(deprecated)]
impl Sealed for &'_ mut crate::Buffer {}
}
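
A sketch of what the trait plumbing above provides to the stream wrappers (the `write_twice` helper and its messages are illustrative, using the `RawStream` and `AsLockedWrite` traits defined in this file): the writer is queried once for terminal-ness and locked once for the duration of the call.

```rust
fn write_twice<S>(raw: &mut S) -> std::io::Result<()>
where
    S: RawStream + AsLockedWrite,
{
    use std::io::Write as _;

    let on_terminal = raw.is_terminal();
    // For `Stdout`/`Stderr` this takes the lock once; for already-locked or
    // plain writers it just reborrows, so both writes stay contiguous.
    let mut w = raw.as_locked_write();
    writeln!(w, "is_terminal: {on_terminal}")?;
    writeln!(w, "second line, same lock")?;
    w.flush()
}
```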

View File

@@ -1,219 +0,0 @@
use crate::adapter::StripBytes;
use crate::stream::AsLockedWrite;
use crate::stream::RawStream;
/// Only pass printable data to the inner `Write`
#[derive(Debug)]
pub struct StripStream<S>
where
S: RawStream,
{
raw: S,
state: StripBytes,
}
impl<S> StripStream<S>
where
S: RawStream,
{
/// Only pass printable data to the inner `Write`
#[inline]
pub fn new(raw: S) -> Self {
Self {
raw,
state: Default::default(),
}
}
/// Get the wrapped [`RawStream`]
#[inline]
pub fn into_inner(self) -> S {
self.raw
}
#[inline]
pub fn is_terminal(&self) -> bool {
self.raw.is_terminal()
}
}
impl StripStream<std::io::Stdout> {
/// Get exclusive access to the `StripStream`
///
/// Why?
/// - Faster performance when writing in a loop
/// - Avoid other threads interleaving output with the current thread
#[inline]
pub fn lock(self) -> StripStream<std::io::StdoutLock<'static>> {
StripStream {
raw: self.raw.lock(),
state: self.state,
}
}
}
impl StripStream<std::io::Stderr> {
/// Get exclusive access to the `StripStream`
///
/// Why?
/// - Faster performance when writing in a loop
/// - Avoid other threads interleaving output with the current thread
#[inline]
pub fn lock(self) -> StripStream<std::io::StderrLock<'static>> {
StripStream {
raw: self.raw.lock(),
state: self.state,
}
}
}
impl<S> std::io::Write for StripStream<S>
where
S: RawStream + AsLockedWrite,
{
// Must forward all calls to ensure locking happens appropriately
#[inline]
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
write(&mut self.raw.as_locked_write(), &mut self.state, buf)
}
#[inline]
fn write_vectored(&mut self, bufs: &[std::io::IoSlice<'_>]) -> std::io::Result<usize> {
let buf = bufs
.iter()
.find(|b| !b.is_empty())
.map(|b| &**b)
.unwrap_or(&[][..]);
self.write(buf)
}
// is_write_vectored: nightly only
#[inline]
fn flush(&mut self) -> std::io::Result<()> {
self.raw.as_locked_write().flush()
}
#[inline]
fn write_all(&mut self, buf: &[u8]) -> std::io::Result<()> {
write_all(&mut self.raw.as_locked_write(), &mut self.state, buf)
}
// write_all_vectored: nightly only
#[inline]
fn write_fmt(&mut self, args: std::fmt::Arguments<'_>) -> std::io::Result<()> {
write_fmt(&mut self.raw.as_locked_write(), &mut self.state, args)
}
}
fn write(
raw: &mut dyn std::io::Write,
state: &mut StripBytes,
buf: &[u8],
) -> std::io::Result<usize> {
let initial_state = state.clone();
for printable in state.strip_next(buf) {
let possible = printable.len();
let written = raw.write(printable)?;
if possible != written {
let divergence = &printable[written..];
let offset = offset_to(buf, divergence);
let consumed = &buf[offset..];
*state = initial_state;
state.strip_next(consumed).last();
return Ok(offset);
}
}
Ok(buf.len())
}
fn write_all(
raw: &mut dyn std::io::Write,
state: &mut StripBytes,
buf: &[u8],
) -> std::io::Result<()> {
for printable in state.strip_next(buf) {
raw.write_all(printable)?;
}
Ok(())
}
fn write_fmt(
raw: &mut dyn std::io::Write,
state: &mut StripBytes,
args: std::fmt::Arguments<'_>,
) -> std::io::Result<()> {
let write_all = |buf: &[u8]| write_all(raw, state, buf);
crate::fmt::Adapter::new(write_all).write_fmt(args)
}
#[inline]
fn offset_to(total: &[u8], subslice: &[u8]) -> usize {
let total = total.as_ptr();
let subslice = subslice.as_ptr();
debug_assert!(
total <= subslice,
"`Offset::offset_to` only accepts slices of `self`"
);
subslice as usize - total as usize
}
#[cfg(test)]
mod test {
use super::*;
use proptest::prelude::*;
use std::io::Write as _;
proptest! {
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn write_all_no_escapes(s in "\\PC*") {
let buffer = Vec::new();
let mut stream = StripStream::new(buffer);
stream.write_all(s.as_bytes()).unwrap();
let buffer = stream.into_inner();
let actual = std::str::from_utf8(buffer.as_ref()).unwrap();
assert_eq!(s, actual);
}
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn write_byte_no_escapes(s in "\\PC*") {
let buffer = Vec::new();
let mut stream = StripStream::new(buffer);
for byte in s.as_bytes() {
stream.write_all(&[*byte]).unwrap();
}
let buffer = stream.into_inner();
let actual = std::str::from_utf8(buffer.as_ref()).unwrap();
assert_eq!(s, actual);
}
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn write_all_random(s in any::<Vec<u8>>()) {
let buffer = Vec::new();
let mut stream = StripStream::new(buffer);
stream.write_all(s.as_slice()).unwrap();
let buffer = stream.into_inner();
if let Ok(actual) = std::str::from_utf8(buffer.as_ref()) {
for char in actual.chars() {
assert!(!char.is_ascii() || !char.is_control() || char.is_ascii_whitespace(), "{:?} -> {:?}: {:?}", String::from_utf8_lossy(&s), actual, char);
}
}
}
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn write_byte_random(s in any::<Vec<u8>>()) {
let buffer = Vec::new();
let mut stream = StripStream::new(buffer);
for byte in s.as_slice() {
stream.write_all(&[*byte]).unwrap();
}
let buffer = stream.into_inner();
if let Ok(actual) = std::str::from_utf8(buffer.as_ref()) {
for char in actual.chars() {
assert!(!char.is_ascii() || !char.is_control() || char.is_ascii_whitespace(), "{:?} -> {:?}: {:?}", String::from_utf8_lossy(&s), actual, char);
}
}
}
}
}
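
A brief sketch of the `StripStream` above used for capture (the `capture_plain` helper is illustrative): because `Vec<u8>` implements `RawStream` and is never a terminal, escape sequences are dropped and only printable bytes are kept.

```rust
fn capture_plain(styled: &[u8]) -> std::io::Result<Vec<u8>> {
    use std::io::Write as _;

    let mut stream = StripStream::new(Vec::new());
    stream.write_all(styled)?;
    // e.g. b"\x1b[32mfoo\x1b[m bar" comes back as b"foo bar"
    Ok(stream.into_inner())
}
```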

View File

@@ -1,210 +0,0 @@
use crate::adapter::WinconBytes;
use crate::stream::AsLockedWrite;
use crate::stream::RawStream;
/// Only pass printable data to the inner `Write`
#[cfg(feature = "wincon")] // here mostly for documentation purposes
#[derive(Debug)]
pub struct WinconStream<S>
where
S: RawStream,
{
raw: S,
// `WinconBytes` is especially large compared to the other variants of `AutoStream`, so it is
// boxed here so that `AutoStream` doesn't have to discard one allocation and create another
// one when calling `AutoStream::lock`
state: Box<WinconBytes>,
}
impl<S> WinconStream<S>
where
S: RawStream,
{
/// Only pass printable data to the inner `Write`
#[inline]
pub fn new(raw: S) -> Self {
Self {
raw,
state: Default::default(),
}
}
/// Get the wrapped [`RawStream`]
#[inline]
pub fn into_inner(self) -> S {
self.raw
}
#[inline]
pub fn is_terminal(&self) -> bool {
self.raw.is_terminal()
}
}
impl WinconStream<std::io::Stdout> {
/// Get exclusive access to the `WinconStream`
///
/// Why?
/// - Faster performance when writing in a loop
/// - Avoid other threads interleaving output with the current thread
#[inline]
pub fn lock(self) -> WinconStream<std::io::StdoutLock<'static>> {
WinconStream {
raw: self.raw.lock(),
state: self.state,
}
}
}
impl WinconStream<std::io::Stderr> {
/// Get exclusive access to the `WinconStream`
///
/// Why?
/// - Faster performance when writing in a loop
/// - Avoid other threads interleaving output with the current thread
#[inline]
pub fn lock(self) -> WinconStream<std::io::StderrLock<'static>> {
WinconStream {
raw: self.raw.lock(),
state: self.state,
}
}
}
impl<S> std::io::Write for WinconStream<S>
where
S: RawStream + AsLockedWrite,
{
// Must forward all calls to ensure locking happens appropriately
#[inline]
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
write(&mut self.raw.as_locked_write(), &mut self.state, buf)
}
#[inline]
fn write_vectored(&mut self, bufs: &[std::io::IoSlice<'_>]) -> std::io::Result<usize> {
let buf = bufs
.iter()
.find(|b| !b.is_empty())
.map(|b| &**b)
.unwrap_or(&[][..]);
self.write(buf)
}
// is_write_vectored: nightly only
#[inline]
fn flush(&mut self) -> std::io::Result<()> {
self.raw.as_locked_write().flush()
}
#[inline]
fn write_all(&mut self, buf: &[u8]) -> std::io::Result<()> {
write_all(&mut self.raw.as_locked_write(), &mut self.state, buf)
}
// write_all_vectored: nightly only
#[inline]
fn write_fmt(&mut self, args: std::fmt::Arguments<'_>) -> std::io::Result<()> {
write_fmt(&mut self.raw.as_locked_write(), &mut self.state, args)
}
}
fn write(raw: &mut dyn RawStream, state: &mut WinconBytes, buf: &[u8]) -> std::io::Result<usize> {
for (style, printable) in state.extract_next(buf) {
let fg = style.get_fg_color().and_then(cap_wincon_color);
let bg = style.get_bg_color().and_then(cap_wincon_color);
let written = raw.write_colored(fg, bg, printable.as_bytes())?;
let possible = printable.len();
if possible != written {
// HACK: Unsupported atm
break;
}
}
Ok(buf.len())
}
fn write_all(raw: &mut dyn RawStream, state: &mut WinconBytes, buf: &[u8]) -> std::io::Result<()> {
for (style, printable) in state.extract_next(buf) {
let mut buf = printable.as_bytes();
let fg = style.get_fg_color().and_then(cap_wincon_color);
let bg = style.get_bg_color().and_then(cap_wincon_color);
while !buf.is_empty() {
match raw.write_colored(fg, bg, buf) {
Ok(0) => {
return Err(std::io::Error::new(
std::io::ErrorKind::WriteZero,
"failed to write whole buffer",
));
}
Ok(n) => buf = &buf[n..],
Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => {}
Err(e) => return Err(e),
}
}
}
Ok(())
}
fn write_fmt(
raw: &mut dyn RawStream,
state: &mut WinconBytes,
args: std::fmt::Arguments<'_>,
) -> std::io::Result<()> {
let write_all = |buf: &[u8]| write_all(raw, state, buf);
crate::fmt::Adapter::new(write_all).write_fmt(args)
}
fn cap_wincon_color(color: anstyle::Color) -> Option<anstyle::AnsiColor> {
match color {
anstyle::Color::Ansi(c) => Some(c),
anstyle::Color::Ansi256(c) => c.into_ansi(),
anstyle::Color::Rgb(_) => None,
}
}
#[cfg(test)]
mod test {
use super::*;
use proptest::prelude::*;
use std::io::Write as _;
proptest! {
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn write_all_no_escapes(s in "\\PC*") {
let buffer = Vec::new();
let mut stream = WinconStream::new(buffer);
stream.write_all(s.as_bytes()).unwrap();
let buffer = stream.into_inner();
let actual = std::str::from_utf8(buffer.as_ref()).unwrap();
assert_eq!(s, actual);
}
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn write_byte_no_escapes(s in "\\PC*") {
let buffer = Vec::new();
let mut stream = WinconStream::new(buffer);
for byte in s.as_bytes() {
stream.write_all(&[*byte]).unwrap();
}
let buffer = stream.into_inner();
let actual = std::str::from_utf8(buffer.as_ref()).unwrap();
assert_eq!(s, actual);
}
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn write_all_random(s in any::<Vec<u8>>()) {
let buffer = Vec::new();
let mut stream = WinconStream::new(buffer);
stream.write_all(s.as_slice()).unwrap();
}
#[test]
#[cfg_attr(miri, ignore)] // See https://github.com/AltSysrq/proptest/issues/253
fn write_byte_random(s in any::<Vec<u8>>()) {
let buffer = Vec::new();
let mut stream = WinconStream::new(buffer);
for byte in s.as_slice() {
stream.write_all(&[*byte]).unwrap();
}
}
}
}


@@ -1 +0,0 @@
{"files":{"Cargo.lock":"7f68b5328c460caf1d2198b10fe1761e5f0282262f92d04076b30b25539970b0","Cargo.toml":"2834f39b7169c03b03da1e209f56133783ce00ea64d5f2c14381d93984ca20bf","LICENSE-APACHE":"b40930bbcf80744c86c46a12bc9da056641d722716c378f5659b9e555ef833e1","LICENSE-MIT":"c1d4bc00896473e0109ccb4c3c7d21addb55a4ff1a644be204dcfce26612af2a","README.md":"abc82171d436ee0eb221838e8d21a21a2e392504e87f0c130b5eca6a35671e1e","benches/parse.rs":"336c808d51c90db2497fa87e571df7f71c844a1b09be88839fe4255066c632f4","examples/parselog.rs":"58b7db739deed701aa0ab386d0d0c1772511b8aed1c08d31ec5b35a1c8cd4321","src/lib.rs":"c89f2afa0e982276dc47ca8d8a76d47516aa39aa9d3354254c87fdbf2f8ef4cc","src/params.rs":"8cfef4e2ab1961ca2d9f210da553fc6ac64bb6dbd03321f0ee7d6089ab45389c","src/state/codegen.rs":"8530124c8f998f391e47950f130590376321dcade810990f4312c3b1c0a61968","src/state/definitions.rs":"dc3dbb3244def74430a72b0108f019e22cc02e0ae5f563ee14d38300ff82b814","src/state/mod.rs":"be07c2ea393a971dd54117dc2ce8a3ffb5b803cb557ab468389b74570855fa37","src/state/table.rs":"673b7e9242c5248efc076086cc6923578ec2f059c0c26da21363528e20e4285c"},"package":"c75ac65da39e5fe5ab759307499ddad880d724eed2f6ce5b5e8a26f4f387928c"}

1202 vendor/anstyle-parse/Cargo.lock (generated, vendored)

File diff suppressed because it is too large


@@ -1,108 +0,0 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2021"
rust-version = "1.70.0"
name = "anstyle-parse"
version = "0.2.3"
include = [
"build.rs",
"src/**/*",
"Cargo.toml",
"Cargo.lock",
"LICENSE*",
"README.md",
"benches/**/*",
"examples/**/*",
]
description = "Parse ANSI Style Escapes"
homepage = "https://github.com/rust-cli/anstyle"
readme = "README.md"
keywords = [
"ansi",
"terminal",
"color",
"vte",
]
categories = ["command-line-interface"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/rust-cli/anstyle.git"
[[package.metadata.release.pre-release-replacements]]
file = "CHANGELOG.md"
min = 1
replace = "{{version}}"
search = "Unreleased"
[[package.metadata.release.pre-release-replacements]]
exactly = 1
file = "CHANGELOG.md"
replace = "...{{tag_name}}"
search = '\.\.\.HEAD'
[[package.metadata.release.pre-release-replacements]]
file = "CHANGELOG.md"
min = 1
replace = "{{date}}"
search = "ReleaseDate"
[[package.metadata.release.pre-release-replacements]]
exactly = 1
file = "CHANGELOG.md"
replace = """
<!-- next-header -->
## [Unreleased] - ReleaseDate
"""
search = "<!-- next-header -->"
[[package.metadata.release.pre-release-replacements]]
exactly = 1
file = "CHANGELOG.md"
replace = """
<!-- next-url -->
[Unreleased]: https://github.com/rust-cli/anstyle/compare/{{tag_name}}...HEAD"""
search = "<!-- next-url -->"
[[bench]]
name = "parse"
harness = false
[dependencies.arrayvec]
version = "0.7.2"
optional = true
default-features = false
[dependencies.utf8parse]
version = "0.2.1"
optional = true
[dev-dependencies.codegenrs]
version = "3.0.1"
default-features = false
[dev-dependencies.criterion]
version = "0.5.1"
[dev-dependencies.proptest]
version = "1.4.0"
[dev-dependencies.snapbox]
version = "0.4.14"
features = ["path"]
[dev-dependencies.vte_generate_state_changes]
version = "0.1.1"
[features]
core = ["dep:arrayvec"]
default = ["utf8"]
utf8 = ["dep:utf8parse"]


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,25 +0,0 @@
Copyright (c) 2016 Joe Wilm and individual contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@@ -1,33 +0,0 @@
# anstyle-parse
> Parse [ANSI Style Escapes](https://vt100.net/emu/dec_ansi_parser)
[![Documentation](https://img.shields.io/badge/docs-master-blue.svg)][Documentation]
![License](https://img.shields.io/crates/l/anstyle-parse.svg)
[![Crates Status](https://img.shields.io/crates/v/anstyle-parse.svg)](https://crates.io/crates/anstyle-parse)
## License
Licensed under either of
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally
submitted for inclusion in the work by you, as defined in the Apache-2.0
license, shall be dual licensed as above, without any additional terms or
conditions.
### Special Thanks
[chrisduerr](https://github.com/alacritty/vte/commits?author=chrisduerr) and the
[alacritty project](https://github.com/alacritty/alacritty) for
[vte](https://crates.io/crates/vte) which
[this was forked from](https://github.com/alacritty/vte/issues/82)
[Crates.io]: https://crates.io/crates/anstyle-parse
[Documentation]: https://docs.rs/anstyle-parse


@@ -1,169 +0,0 @@
use criterion::{black_box, Criterion};
use anstyle_parse::*;
struct BenchDispatcher;
impl Perform for BenchDispatcher {
fn print(&mut self, c: char) {
black_box(c);
}
fn execute(&mut self, byte: u8) {
black_box(byte);
}
fn hook(&mut self, params: &Params, intermediates: &[u8], ignore: bool, c: u8) {
black_box((params, intermediates, ignore, c));
}
fn put(&mut self, byte: u8) {
black_box(byte);
}
fn osc_dispatch(&mut self, params: &[&[u8]], bell_terminated: bool) {
black_box((params, bell_terminated));
}
fn csi_dispatch(&mut self, params: &Params, intermediates: &[u8], ignore: bool, c: u8) {
black_box((params, intermediates, ignore, c));
}
fn esc_dispatch(&mut self, intermediates: &[u8], ignore: bool, byte: u8) {
black_box((intermediates, ignore, byte));
}
}
#[derive(Default)]
struct Strip(String);
impl Strip {
fn with_capacity(capacity: usize) -> Self {
Self(String::with_capacity(capacity))
}
}
impl Perform for Strip {
fn print(&mut self, c: char) {
self.0.push(c);
}
fn execute(&mut self, byte: u8) {
if byte.is_ascii_whitespace() {
self.0.push(byte as char);
}
}
}
fn strip_str(content: &str) -> String {
use anstyle_parse::state::state_change;
use anstyle_parse::state::Action;
use anstyle_parse::state::State;
#[inline]
fn is_utf8_continuation(b: u8) -> bool {
matches!(b, 0x80..=0xbf)
}
#[inline]
fn is_printable(action: Action, byte: u8) -> bool {
action == Action::Print
|| action == Action::BeginUtf8
// since we know the input is valid UTF-8, the only thing we can do with
// continuations is to print them
|| is_utf8_continuation(byte)
|| (action == Action::Execute && byte.is_ascii_whitespace())
}
let mut stripped = Vec::with_capacity(content.len());
let mut bytes = content.as_bytes();
while !bytes.is_empty() {
let offset = bytes.iter().copied().position(|b| {
let (_next_state, action) = state_change(State::Ground, b);
!is_printable(action, b)
});
let (printable, next) = bytes.split_at(offset.unwrap_or(bytes.len()));
stripped.extend(printable);
bytes = next;
let mut state = State::Ground;
let offset = bytes.iter().copied().position(|b| {
let (next_state, action) = state_change(state, b);
if next_state != State::Anywhere {
state = next_state;
}
is_printable(action, b)
});
let (_, next) = bytes.split_at(offset.unwrap_or(bytes.len()));
bytes = next;
}
String::from_utf8(stripped).unwrap()
}
fn parse(c: &mut Criterion) {
for (name, content) in [
#[cfg(feature = "utf8")]
("demo.vte", &include_bytes!("../tests/demo.vte")[..]),
("rg_help.vte", &include_bytes!("../tests/rg_help.vte")[..]),
("rg_linus.vte", &include_bytes!("../tests/rg_linus.vte")[..]),
(
"state_changes",
&b"\x1b]2;X\x1b\\ \x1b[0m \x1bP0@\x1b\\"[..],
),
] {
// Make sure the comparison is fair
if let Ok(content) = std::str::from_utf8(content) {
let mut stripped = Strip::with_capacity(content.len());
let mut parser = Parser::<DefaultCharAccumulator>::new();
for byte in content.as_bytes() {
parser.advance(&mut stripped, *byte);
}
assert_eq!(stripped.0, strip_str(content));
}
let mut group = c.benchmark_group(name);
group.bench_function("advance", |b| {
b.iter(|| {
let mut dispatcher = BenchDispatcher;
let mut parser = Parser::<DefaultCharAccumulator>::new();
for byte in content {
parser.advance(&mut dispatcher, *byte);
}
})
});
group.bench_function("advance_strip", |b| {
b.iter(|| {
let mut stripped = Strip::with_capacity(content.len());
let mut parser = Parser::<DefaultCharAccumulator>::new();
for byte in content {
parser.advance(&mut stripped, *byte);
}
black_box(stripped.0)
})
});
group.bench_function("state_change", |b| {
b.iter(|| {
let mut state = anstyle_parse::state::State::default();
for byte in content {
let (next_state, action) = anstyle_parse::state::state_change(state, *byte);
state = next_state;
black_box(action);
}
})
});
if let Ok(content) = std::str::from_utf8(content) {
group.bench_function("state_change_strip_str", |b| {
b.iter(|| {
let stripped = strip_str(content);
black_box(stripped)
})
});
}
}
}
criterion::criterion_group!(benches, parse);
criterion::criterion_main!(benches);


@@ -1,78 +0,0 @@
//! Parse input from stdin and log actions on stdout
use std::io::{self, Read};
use anstyle_parse::{DefaultCharAccumulator, Params, Parser, Perform};
/// A type implementing Perform that just logs actions
struct Log;
impl Perform for Log {
fn print(&mut self, c: char) {
println!("[print] {:?}", c);
}
fn execute(&mut self, byte: u8) {
println!("[execute] {:02x}", byte);
}
fn hook(&mut self, params: &Params, intermediates: &[u8], ignore: bool, c: u8) {
println!(
"[hook] params={:?}, intermediates={:?}, ignore={:?}, char={:?}",
params, intermediates, ignore, c
);
}
fn put(&mut self, byte: u8) {
println!("[put] {:02x}", byte);
}
fn unhook(&mut self) {
println!("[unhook]");
}
fn osc_dispatch(&mut self, params: &[&[u8]], bell_terminated: bool) {
println!(
"[osc_dispatch] params={:?} bell_terminated={}",
params, bell_terminated
);
}
fn csi_dispatch(&mut self, params: &Params, intermediates: &[u8], ignore: bool, c: u8) {
println!(
"[csi_dispatch] params={:#?}, intermediates={:?}, ignore={:?}, char={:?}",
params, intermediates, ignore, c
);
}
fn esc_dispatch(&mut self, intermediates: &[u8], ignore: bool, byte: u8) {
println!(
"[esc_dispatch] intermediates={:?}, ignore={:?}, byte={:02x}",
intermediates, ignore, byte
);
}
}
fn main() {
let input = io::stdin();
let mut handle = input.lock();
let mut statemachine = Parser::<DefaultCharAccumulator>::new();
let mut performer = Log;
let mut buf = [0; 2048];
loop {
match handle.read(&mut buf) {
Ok(0) => break,
Ok(n) => {
for byte in &buf[..n] {
statemachine.advance(&mut performer, *byte);
}
}
Err(err) => {
println!("err: {}", err);
break;
}
}
}
}


@@ -1,431 +0,0 @@
//! Parser for implementing virtual terminal emulators
//!
//! [`Parser`] is implemented according to [Paul Williams' ANSI parser
//! state machine]. The state machine doesn't assign meaning to the parsed data
//! and is thus not itself sufficient for writing a terminal emulator. Instead,
//! it is expected that an implementation of [`Perform`] is provided which does
//! something useful with the parsed data. The [`Parser`] handles the bookkeeping,
//! and the [`Perform`] gets to simply handle actions.
//!
//! # Examples
//!
//! For an example of using the [`Parser`] please see the examples folder. The example included
//! there simply logs all the actions [`Perform`] does. One quick thing to see it in action is to
//! pipe `vim` into it
//!
//! ```sh
//! cargo build --release --example parselog
//! vim | target/release/examples/parselog
//! ```
//!
//! Just type `:q` to exit.
//!
//! # Differences from original state machine description
//!
//! * UTF-8 Support for Input
//! * OSC Strings can be terminated by 0x07
//! * Only supports 7-bit codes. Some 8-bit codes are still supported, but they no longer work in
//! all states.
//!
//! [Paul Williams' ANSI parser state machine]: https://vt100.net/emu/dec_ansi_parser
#![cfg_attr(not(test), no_std)]
#[cfg(not(feature = "core"))]
extern crate alloc;
use core::mem::MaybeUninit;
#[cfg(feature = "core")]
use arrayvec::ArrayVec;
#[cfg(feature = "utf8")]
use utf8parse as utf8;
mod params;
pub mod state;
pub use params::{Params, ParamsIter};
use state::{state_change, Action, State};
const MAX_INTERMEDIATES: usize = 2;
const MAX_OSC_PARAMS: usize = 16;
#[cfg(feature = "core")]
const MAX_OSC_RAW: usize = 1024;
/// Parser for raw _VTE_ protocol which delegates actions to a [`Perform`]
#[derive(Default, Clone, Debug, PartialEq, Eq)]
pub struct Parser<C = DefaultCharAccumulator> {
state: State,
intermediates: [u8; MAX_INTERMEDIATES],
intermediate_idx: usize,
params: Params,
param: u16,
#[cfg(feature = "core")]
osc_raw: ArrayVec<u8, MAX_OSC_RAW>,
#[cfg(not(feature = "core"))]
osc_raw: alloc::vec::Vec<u8>,
osc_params: [(usize, usize); MAX_OSC_PARAMS],
osc_num_params: usize,
ignoring: bool,
utf8_parser: C,
}
impl<C> Parser<C>
where
C: CharAccumulator,
{
/// Create a new Parser
pub fn new() -> Parser {
Parser::default()
}
#[inline]
fn params(&self) -> &Params {
&self.params
}
#[inline]
fn intermediates(&self) -> &[u8] {
&self.intermediates[..self.intermediate_idx]
}
/// Advance the parser state
///
/// Requires a [`Perform`] in case `byte` triggers an action
#[inline]
pub fn advance<P: Perform>(&mut self, performer: &mut P, byte: u8) {
// Utf8 characters are handled out-of-band.
if let State::Utf8 = self.state {
self.process_utf8(performer, byte);
return;
}
let (state, action) = state_change(self.state, byte);
self.perform_state_change(performer, state, action, byte);
}
#[inline]
fn process_utf8<P>(&mut self, performer: &mut P, byte: u8)
where
P: Perform,
{
if let Some(c) = self.utf8_parser.add(byte) {
performer.print(c);
self.state = State::Ground;
}
}
#[inline]
fn perform_state_change<P>(&mut self, performer: &mut P, state: State, action: Action, byte: u8)
where
P: Perform,
{
match state {
State::Anywhere => {
// Just run the action
self.perform_action(performer, action, byte);
}
state => {
match self.state {
State::DcsPassthrough => {
self.perform_action(performer, Action::Unhook, byte);
}
State::OscString => {
self.perform_action(performer, Action::OscEnd, byte);
}
_ => (),
}
match action {
Action::Nop => (),
action => {
self.perform_action(performer, action, byte);
}
}
match state {
State::CsiEntry | State::DcsEntry | State::Escape => {
self.perform_action(performer, Action::Clear, byte);
}
State::DcsPassthrough => {
self.perform_action(performer, Action::Hook, byte);
}
State::OscString => {
self.perform_action(performer, Action::OscStart, byte);
}
_ => (),
}
// Assume the new state
self.state = state;
}
}
}
/// Separate method for osc_dispatch that borrows self as read-only
///
/// The aliasing is needed here for multiple slices into self.osc_raw
#[inline]
fn osc_dispatch<P: Perform>(&self, performer: &mut P, byte: u8) {
let mut slices: [MaybeUninit<&[u8]>; MAX_OSC_PARAMS] =
unsafe { MaybeUninit::uninit().assume_init() };
for (i, slice) in slices.iter_mut().enumerate().take(self.osc_num_params) {
let indices = self.osc_params[i];
*slice = MaybeUninit::new(&self.osc_raw[indices.0..indices.1]);
}
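// SAFETY: only the first `osc_num_params` slots were initialized in the loop above, and
// the cast below is limited to exactly that prefix, so every element behind `params`
// refers to an initialized slice of `osc_raw`.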
unsafe {
let num_params = self.osc_num_params;
let params = &slices[..num_params] as *const [MaybeUninit<&[u8]>] as *const [&[u8]];
performer.osc_dispatch(&*params, byte == 0x07);
}
}
#[inline]
fn perform_action<P: Perform>(&mut self, performer: &mut P, action: Action, byte: u8) {
match action {
Action::Print => performer.print(byte as char),
Action::Execute => performer.execute(byte),
Action::Hook => {
if self.params.is_full() {
self.ignoring = true;
} else {
self.params.push(self.param);
}
performer.hook(self.params(), self.intermediates(), self.ignoring, byte);
}
Action::Put => performer.put(byte),
Action::OscStart => {
self.osc_raw.clear();
self.osc_num_params = 0;
}
Action::OscPut => {
#[cfg(feature = "core")]
{
if self.osc_raw.is_full() {
return;
}
}
let idx = self.osc_raw.len();
// Param separator
if byte == b';' {
let param_idx = self.osc_num_params;
match param_idx {
// Only process up to MAX_OSC_PARAMS
MAX_OSC_PARAMS => return,
// First param is special - 0 to current byte index
0 => {
self.osc_params[param_idx] = (0, idx);
}
// All other params depend on previous indexing
_ => {
let prev = self.osc_params[param_idx - 1];
let begin = prev.1;
self.osc_params[param_idx] = (begin, idx);
}
}
self.osc_num_params += 1;
} else {
self.osc_raw.push(byte);
}
}
Action::OscEnd => {
let param_idx = self.osc_num_params;
let idx = self.osc_raw.len();
match param_idx {
// Finish last parameter if not already maxed
MAX_OSC_PARAMS => (),
// First param is special - 0 to current byte index
0 => {
self.osc_params[param_idx] = (0, idx);
self.osc_num_params += 1;
}
// All other params depend on previous indexing
_ => {
let prev = self.osc_params[param_idx - 1];
let begin = prev.1;
self.osc_params[param_idx] = (begin, idx);
self.osc_num_params += 1;
}
}
self.osc_dispatch(performer, byte);
}
Action::Unhook => performer.unhook(),
Action::CsiDispatch => {
if self.params.is_full() {
self.ignoring = true;
} else {
self.params.push(self.param);
}
performer.csi_dispatch(self.params(), self.intermediates(), self.ignoring, byte);
}
Action::EscDispatch => {
performer.esc_dispatch(self.intermediates(), self.ignoring, byte);
}
Action::Collect => {
if self.intermediate_idx == MAX_INTERMEDIATES {
self.ignoring = true;
} else {
self.intermediates[self.intermediate_idx] = byte;
self.intermediate_idx += 1;
}
}
Action::Param => {
if self.params.is_full() {
self.ignoring = true;
return;
}
if byte == b';' {
self.params.push(self.param);
self.param = 0;
} else if byte == b':' {
self.params.extend(self.param);
self.param = 0;
} else {
// Continue collecting bytes into param
self.param = self.param.saturating_mul(10);
self.param = self.param.saturating_add((byte - b'0') as u16);
}
}
Action::Clear => {
// Reset everything on ESC/CSI/DCS entry
self.intermediate_idx = 0;
self.ignoring = false;
self.param = 0;
self.params.clear();
}
Action::BeginUtf8 => self.process_utf8(performer, byte),
Action::Ignore => (),
Action::Nop => (),
}
}
}
/// Build a `char` out of bytes
pub trait CharAccumulator: Default {
/// Build a `char` out of bytes
///
/// Return `None` when more data is needed
fn add(&mut self, byte: u8) -> Option<char>;
}
#[cfg(feature = "utf8")]
pub type DefaultCharAccumulator = Utf8Parser;
#[cfg(not(feature = "utf8"))]
pub type DefaultCharAccumulator = AsciiParser;
/// Only allow parsing 7-bit ASCII
#[derive(Default, Clone, Debug, PartialEq, Eq)]
pub struct AsciiParser;
impl CharAccumulator for AsciiParser {
fn add(&mut self, _byte: u8) -> Option<char> {
unreachable!("multi-byte UTF8 characters are unsupported")
}
}
/// Allow parsing UTF-8
#[cfg(feature = "utf8")]
#[derive(Default, Clone, Debug, PartialEq, Eq)]
pub struct Utf8Parser {
utf8_parser: utf8::Parser,
}
#[cfg(feature = "utf8")]
impl CharAccumulator for Utf8Parser {
fn add(&mut self, byte: u8) -> Option<char> {
let mut c = None;
let mut receiver = VtUtf8Receiver(&mut c);
self.utf8_parser.advance(&mut receiver, byte);
c
}
}
#[cfg(feature = "utf8")]
struct VtUtf8Receiver<'a>(&'a mut Option<char>);
#[cfg(feature = "utf8")]
impl<'a> utf8::Receiver for VtUtf8Receiver<'a> {
fn codepoint(&mut self, c: char) {
*self.0 = Some(c);
}
fn invalid_sequence(&mut self) {
*self.0 = Some('\u{FFFD}');
}
}
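// Illustrative sketch (not part of the original file): a two-byte UTF-8 sequence
// ("é" = 0xC3 0xA9) fed through the accumulator; `add` returns `None` until the final
// byte completes the character.
#[cfg(all(test, feature = "utf8"))]
mod char_accumulator_example {
    use super::*;

    #[test]
    fn accumulates_multibyte_char() {
        let mut acc = Utf8Parser::default();
        assert_eq!(acc.add(0xc3), None);
        assert_eq!(acc.add(0xa9), Some('é'));
    }
}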
/// Performs actions requested by the [`Parser`]
///
/// Actions in this case mean, for example, handling a CSI escape sequence describing cursor
/// movement, or simply printing characters to the screen.
///
/// The methods on this type correspond to actions described in
/// <http://vt100.net/emu/dec_ansi_parser>. I've done my best to describe them in
/// a useful way in my own words for completeness, but the site should be
/// referenced if something isn't clear. If the site disappears at some point in
/// the future, consider checking archive.org.
pub trait Perform {
/// Draw a character to the screen and update states.
fn print(&mut self, _c: char) {}
/// Execute a C0 or C1 control function.
fn execute(&mut self, _byte: u8) {}
/// Invoked when a final character arrives in the first part of a device control string.
///
/// The control function should be determined from the private marker, final character, and
/// execute with a parameter list. A handler should be selected for remaining characters in the
/// string; the handler function should subsequently be called by `put` for every character in
/// the control string.
///
/// The `ignore` flag indicates that more than two intermediates arrived and
/// subsequent characters were ignored.
fn hook(&mut self, _params: &Params, _intermediates: &[u8], _ignore: bool, _action: u8) {}
/// Pass bytes as part of a device control string to the handler chosen in `hook`. C0 controls
/// will also be passed to the handler.
fn put(&mut self, _byte: u8) {}
/// Called when a device control string is terminated.
///
/// The previously selected handler should be notified that the DCS has
/// terminated.
fn unhook(&mut self) {}
/// Dispatch an operating system command.
fn osc_dispatch(&mut self, _params: &[&[u8]], _bell_terminated: bool) {}
/// A final character has arrived for a CSI sequence
///
/// The `ignore` flag indicates that either more than two intermediates arrived
/// or the number of parameters exceeded the maximum supported length,
/// and subsequent characters were ignored.
fn csi_dispatch(
&mut self,
_params: &Params,
_intermediates: &[u8],
_ignore: bool,
_action: u8,
) {
}
/// The final character of an escape sequence has arrived.
///
/// The `ignore` flag indicates that more than two intermediates arrived and
/// subsequent characters were ignored.
fn esc_dispatch(&mut self, _intermediates: &[u8], _ignore: bool, _byte: u8) {}
}
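// Illustrative sketch (not part of the original file): a minimal `Perform` that keeps only
// printed characters, so escape sequences are parsed and dropped while plain text passes
// through `Parser::advance` unchanged.
#[cfg(test)]
mod perform_example {
    use super::*;

    #[derive(Default)]
    struct Plain(String);

    impl Perform for Plain {
        fn print(&mut self, c: char) {
            self.0.push(c);
        }
    }

    #[test]
    fn drops_csi_sequences() {
        let mut parser = Parser::<DefaultCharAccumulator>::new();
        let mut plain = Plain::default();
        for byte in b"\x1b[1;31mred\x1b[0m" {
            parser.advance(&mut plain, *byte);
        }
        assert_eq!(plain.0, "red");
    }
}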


@@ -1,143 +0,0 @@
//! Fixed size parameters list with optional subparameters.
use core::fmt::{self, Debug, Formatter};
pub(crate) const MAX_PARAMS: usize = 32;
#[derive(Default, Clone, PartialEq, Eq)]
pub struct Params {
/// Number of subparameters for each parameter.
///
/// For each entry in the `params` slice, this stores the length of the param as number of
/// subparams at the same index as the param in the `params` slice.
///
/// At the subparam positions the length will always be `0`.
subparams: [u8; MAX_PARAMS],
/// All parameters and subparameters.
params: [u16; MAX_PARAMS],
/// Number of subparameters in the current parameter.
current_subparams: u8,
/// Total number of parameters and subparameters.
len: usize,
}
impl Params {
/// Returns the number of parameters.
#[inline]
pub fn len(&self) -> usize {
self.len
}
/// Returns `true` if there are no parameters present.
#[inline]
pub fn is_empty(&self) -> bool {
self.len == 0
}
/// Returns an iterator over all parameters and subparameters.
#[inline]
pub fn iter(&self) -> ParamsIter<'_> {
ParamsIter::new(self)
}
/// Returns `true` if there is no more space for additional parameters.
#[inline]
pub(crate) fn is_full(&self) -> bool {
self.len == MAX_PARAMS
}
/// Clear all parameters.
#[inline]
pub(crate) fn clear(&mut self) {
self.current_subparams = 0;
self.len = 0;
}
/// Add an additional parameter.
#[inline]
pub(crate) fn push(&mut self, item: u16) {
self.subparams[self.len - self.current_subparams as usize] = self.current_subparams + 1;
self.params[self.len] = item;
self.current_subparams = 0;
self.len += 1;
}
/// Add an additional subparameter to the current parameter.
#[inline]
pub(crate) fn extend(&mut self, item: u16) {
self.subparams[self.len - self.current_subparams as usize] = self.current_subparams + 1;
self.params[self.len] = item;
self.current_subparams += 1;
self.len += 1;
}
}
impl<'a> IntoIterator for &'a Params {
type IntoIter = ParamsIter<'a>;
type Item = &'a [u16];
fn into_iter(self) -> Self::IntoIter {
self.iter()
}
}
/// Immutable subparameter iterator.
pub struct ParamsIter<'a> {
params: &'a Params,
index: usize,
}
impl<'a> ParamsIter<'a> {
fn new(params: &'a Params) -> Self {
Self { params, index: 0 }
}
}
impl<'a> Iterator for ParamsIter<'a> {
type Item = &'a [u16];
fn next(&mut self) -> Option<Self::Item> {
if self.index >= self.params.len() {
return None;
}
// Get all subparameters for the current parameter.
let num_subparams = self.params.subparams[self.index];
let param = &self.params.params[self.index..self.index + num_subparams as usize];
// Jump to the next parameter.
self.index += num_subparams as usize;
Some(param)
}
fn size_hint(&self) -> (usize, Option<usize>) {
let remaining = self.params.len() - self.index;
(remaining, Some(remaining))
}
}
impl Debug for Params {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
write!(f, "[")?;
for (i, param) in self.iter().enumerate() {
if i != 0 {
write!(f, ";")?;
}
for (i, subparam) in param.iter().enumerate() {
if i != 0 {
write!(f, ":")?;
}
subparam.fmt(f)?;
}
}
write!(f, "]")
}
}
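// Illustrative sketch (not part of the original file): how `extend` and `push` group
// subparameters, mirroring the CSI parameter string "38:2;1" (`extend` on ':', `push` on
// ';' and at dispatch). Iteration yields one slice per parameter, subparameters included.
#[cfg(test)]
mod params_example {
    use super::*;

    #[test]
    fn subparams_grouped_with_their_parameter() {
        let mut params = Params::default();
        params.extend(38); // "38:" — parameter followed by a subparameter
        params.push(2); // "2;"  — closes the subparameter group
        params.push(1); // "1"   — plain parameter, pushed at dispatch
        let groups: Vec<&[u16]> = params.iter().collect();
        assert_eq!(groups, [&[38u16, 2][..], &[1][..]]);
    }
}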


@@ -1,217 +0,0 @@
use super::{pack, unpack, Action, State};
use vte_generate_state_changes::generate_state_changes;
#[test]
fn table() {
let mut content = vec![];
generate_table(&mut content).unwrap();
let content = String::from_utf8(content).unwrap();
let content = codegenrs::rustfmt(&content, None).unwrap();
snapbox::assert_eq_path("./src/state/table.rs", content);
}
#[allow(clippy::write_literal)]
fn generate_table(file: &mut impl std::io::Write) -> std::io::Result<()> {
writeln!(
file,
"// This file is @generated by {}",
file!().replace('\\', "/")
)?;
writeln!(file)?;
writeln!(
file,
"{}",
r#"#[rustfmt::skip]
pub(crate) const STATE_CHANGES: [[u8; 256]; 16] = ["#
)?;
for (state, entries) in STATE_CHANGES.iter().enumerate() {
writeln!(file, " // {:?}", State::try_from(state as u8).unwrap())?;
write!(file, " [")?;
let mut last_entry = None;
for packed in entries {
let (next_state, action) = unpack(*packed);
if last_entry != Some(packed) {
writeln!(file)?;
writeln!(file, " // {:?} {:?}", next_state, action)?;
write!(file, " ")?;
}
write!(file, "0x{:0>2x}, ", packed)?;
last_entry = Some(packed);
}
writeln!(file)?;
writeln!(file, " ],")?;
}
writeln!(file, "{}", r#"];"#)?;
Ok(())
}
/// This is the state change table. It's indexed first by current state and then by the next
/// character in the pty stream.
pub static STATE_CHANGES: [[u8; 256]; 16] = state_changes();
generate_state_changes!(state_changes, {
Anywhere {
0x18 => (Ground, Execute),
0x1a => (Ground, Execute),
0x1b => (Escape, Nop),
},
Ground {
0x00..=0x17 => (Anywhere, Execute),
0x19 => (Anywhere, Execute),
0x1c..=0x1f => (Anywhere, Execute),
0x20..=0x7f => (Anywhere, Print),
0x80..=0x8f => (Anywhere, Execute),
0x91..=0x9a => (Anywhere, Execute),
0x9c => (Anywhere, Execute),
// Beginning of UTF-8 2 byte sequence
0xc2..=0xdf => (Utf8, BeginUtf8),
// Beginning of UTF-8 3 byte sequence
0xe0..=0xef => (Utf8, BeginUtf8),
// Beginning of UTF-8 4 byte sequence
0xf0..=0xf4 => (Utf8, BeginUtf8),
},
Escape {
0x00..=0x17 => (Anywhere, Execute),
0x19 => (Anywhere, Execute),
0x1c..=0x1f => (Anywhere, Execute),
0x7f => (Anywhere, Ignore),
0x20..=0x2f => (EscapeIntermediate, Collect),
0x30..=0x4f => (Ground, EscDispatch),
0x51..=0x57 => (Ground, EscDispatch),
0x59 => (Ground, EscDispatch),
0x5a => (Ground, EscDispatch),
0x5c => (Ground, EscDispatch),
0x60..=0x7e => (Ground, EscDispatch),
0x5b => (CsiEntry, Nop),
0x5d => (OscString, Nop),
0x50 => (DcsEntry, Nop),
0x58 => (SosPmApcString, Nop),
0x5e => (SosPmApcString, Nop),
0x5f => (SosPmApcString, Nop),
},
EscapeIntermediate {
0x00..=0x17 => (Anywhere, Execute),
0x19 => (Anywhere, Execute),
0x1c..=0x1f => (Anywhere, Execute),
0x20..=0x2f => (Anywhere, Collect),
0x7f => (Anywhere, Ignore),
0x30..=0x7e => (Ground, EscDispatch),
},
CsiEntry {
0x00..=0x17 => (Anywhere, Execute),
0x19 => (Anywhere, Execute),
0x1c..=0x1f => (Anywhere, Execute),
0x7f => (Anywhere, Ignore),
0x20..=0x2f => (CsiIntermediate, Collect),
0x30..=0x39 => (CsiParam, Param),
0x3a..=0x3b => (CsiParam, Param),
0x3c..=0x3f => (CsiParam, Collect),
0x40..=0x7e => (Ground, CsiDispatch),
},
CsiIgnore {
0x00..=0x17 => (Anywhere, Execute),
0x19 => (Anywhere, Execute),
0x1c..=0x1f => (Anywhere, Execute),
0x20..=0x3f => (Anywhere, Ignore),
0x7f => (Anywhere, Ignore),
0x40..=0x7e => (Ground, Nop),
},
CsiParam {
0x00..=0x17 => (Anywhere, Execute),
0x19 => (Anywhere, Execute),
0x1c..=0x1f => (Anywhere, Execute),
0x30..=0x39 => (Anywhere, Param),
0x3a..=0x3b => (Anywhere, Param),
0x7f => (Anywhere, Ignore),
0x3c..=0x3f => (CsiIgnore, Nop),
0x20..=0x2f => (CsiIntermediate, Collect),
0x40..=0x7e => (Ground, CsiDispatch),
},
CsiIntermediate {
0x00..=0x17 => (Anywhere, Execute),
0x19 => (Anywhere, Execute),
0x1c..=0x1f => (Anywhere, Execute),
0x20..=0x2f => (Anywhere, Collect),
0x7f => (Anywhere, Ignore),
0x30..=0x3f => (CsiIgnore, Nop),
0x40..=0x7e => (Ground, CsiDispatch),
},
DcsEntry {
0x00..=0x17 => (Anywhere, Ignore),
0x19 => (Anywhere, Ignore),
0x1c..=0x1f => (Anywhere, Ignore),
0x7f => (Anywhere, Ignore),
0x20..=0x2f => (DcsIntermediate, Collect),
0x30..=0x39 => (DcsParam, Param),
0x3a..=0x3b => (DcsParam, Param),
0x3c..=0x3f => (DcsParam, Collect),
0x40..=0x7e => (DcsPassthrough, Nop),
},
DcsIntermediate {
0x00..=0x17 => (Anywhere, Ignore),
0x19 => (Anywhere, Ignore),
0x1c..=0x1f => (Anywhere, Ignore),
0x20..=0x2f => (Anywhere, Collect),
0x7f => (Anywhere, Ignore),
0x30..=0x3f => (DcsIgnore, Nop),
0x40..=0x7e => (DcsPassthrough, Nop),
},
DcsIgnore {
0x00..=0x17 => (Anywhere, Ignore),
0x19 => (Anywhere, Ignore),
0x1c..=0x1f => (Anywhere, Ignore),
0x20..=0x7f => (Anywhere, Ignore),
0x9c => (Ground, Nop),
},
DcsParam {
0x00..=0x17 => (Anywhere, Ignore),
0x19 => (Anywhere, Ignore),
0x1c..=0x1f => (Anywhere, Ignore),
0x30..=0x39 => (Anywhere, Param),
0x3a..=0x3b => (Anywhere, Param),
0x7f => (Anywhere, Ignore),
0x3c..=0x3f => (DcsIgnore, Nop),
0x20..=0x2f => (DcsIntermediate, Collect),
0x40..=0x7e => (DcsPassthrough, Nop),
},
DcsPassthrough {
0x00..=0x17 => (Anywhere, Put),
0x19 => (Anywhere, Put),
0x1c..=0x1f => (Anywhere, Put),
0x20..=0x7e => (Anywhere, Put),
0x7f => (Anywhere, Ignore),
0x9c => (Ground, Nop),
},
SosPmApcString {
0x00..=0x17 => (Anywhere, Ignore),
0x19 => (Anywhere, Ignore),
0x1c..=0x1f => (Anywhere, Ignore),
0x20..=0x7f => (Anywhere, Ignore),
0x9c => (Ground, Nop),
},
OscString {
0x00..=0x06 => (Anywhere, Ignore),
0x07 => (Ground, Nop),
0x08..=0x17 => (Anywhere, Ignore),
0x19 => (Anywhere, Ignore),
0x1c..=0x1f => (Anywhere, Ignore),
0x20..=0xff => (Anywhere, OscPut),
}
});


@@ -1,169 +0,0 @@
use core::mem;
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
#[repr(u8)]
#[derive(Default)]
pub enum State {
Anywhere = 0,
CsiEntry = 1,
CsiIgnore = 2,
CsiIntermediate = 3,
CsiParam = 4,
DcsEntry = 5,
DcsIgnore = 6,
DcsIntermediate = 7,
DcsParam = 8,
DcsPassthrough = 9,
Escape = 10,
EscapeIntermediate = 11,
#[default]
Ground = 12,
OscString = 13,
SosPmApcString = 14,
Utf8 = 15,
}
impl TryFrom<u8> for State {
type Error = u8;
#[inline(always)]
fn try_from(raw: u8) -> Result<Self, Self::Error> {
STATES.get(raw as usize).ok_or(raw).copied()
}
}
const STATES: [State; 16] = [
State::Anywhere,
State::CsiEntry,
State::CsiIgnore,
State::CsiIntermediate,
State::CsiParam,
State::DcsEntry,
State::DcsIgnore,
State::DcsIntermediate,
State::DcsParam,
State::DcsPassthrough,
State::Escape,
State::EscapeIntermediate,
State::Ground,
State::OscString,
State::SosPmApcString,
State::Utf8,
];
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
#[derive(Default)]
pub enum Action {
#[default]
Nop = 0,
Clear = 1,
Collect = 2,
CsiDispatch = 3,
EscDispatch = 4,
Execute = 5,
Hook = 6,
Ignore = 7,
OscEnd = 8,
OscPut = 9,
OscStart = 10,
Param = 11,
Print = 12,
Put = 13,
Unhook = 14,
BeginUtf8 = 15,
}
impl TryFrom<u8> for Action {
type Error = u8;
#[inline(always)]
fn try_from(raw: u8) -> Result<Self, Self::Error> {
ACTIONS.get(raw as usize).ok_or(raw).copied()
}
}
const ACTIONS: [Action; 16] = [
Action::Nop,
Action::Clear,
Action::Collect,
Action::CsiDispatch,
Action::EscDispatch,
Action::Execute,
Action::Hook,
Action::Ignore,
Action::OscEnd,
Action::OscPut,
Action::OscStart,
Action::Param,
Action::Print,
Action::Put,
Action::Unhook,
Action::BeginUtf8,
];
/// Unpack a u8 into a State and Action
///
/// The implementation of this assumes that there are *precisely* 16 variants for both Action and
/// State. Furthermore, it assumes that the enums are tag-only; that is, there is no data in any
/// variant.
///
/// Bad things will happen if those invariants are violated.
#[inline(always)]
pub const fn unpack(delta: u8) -> (State, Action) {
unsafe {
(
// State is stored in bottom 4 bits
mem::transmute(delta & 0x0f),
// Action is stored in top 4 bits
mem::transmute(delta >> 4),
)
}
}
#[inline(always)]
#[cfg(test)]
pub const fn pack(state: State, action: Action) -> u8 {
(action as u8) << 4 | state as u8
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn unpack_state_action() {
match unpack(0xee) {
(State::SosPmApcString, Action::Unhook) => (),
_ => panic!("unpack failed"),
}
match unpack(0x0f) {
(State::Utf8, Action::Nop) => (),
_ => panic!("unpack failed"),
}
match unpack(0xff) {
(State::Utf8, Action::BeginUtf8) => (),
_ => panic!("unpack failed"),
}
}
#[test]
fn pack_state_action() {
    // Exercise `pack` itself (previously this test only repeated the `unpack` checks).
    assert_eq!(pack(State::SosPmApcString, Action::Unhook), 0xee);
    assert_eq!(pack(State::Utf8, Action::Nop), 0x0f);
    assert_eq!(pack(State::Utf8, Action::BeginUtf8), 0xff);
}
}


@@ -1,41 +0,0 @@
#[cfg(test)]
mod codegen;
mod definitions;
mod table;
#[cfg(test)]
pub(crate) use definitions::pack;
pub(crate) use definitions::unpack;
pub use definitions::Action;
pub use definitions::State;
/// Transition to next [`State`]
///
/// Note: This does not directly support UTF-8.
/// - If the data is validated as UTF-8 (e.g. `str`) or single-byte C1 control codes are
/// unsupported, then treat [`Action::BeginUtf8`] and [`Action::Execute`] for UTF-8 continuations
/// as [`Action::Print`].
/// - If the data is not validated, then a UTF-8 state machine will need to be implemented on top,
/// starting with [`Action::BeginUtf8`].
///
/// Note: When [`State::Anywhere`] is returned, revert back to the prior state.
#[inline]
pub const fn state_change(state: State, byte: u8) -> (State, Action) {
// Handle state changes in the anywhere state before evaluating changes
// for current state.
let mut change = state_change_(State::Anywhere, byte);
if change == 0 {
change = state_change_(state, byte);
}
// Unpack into a state and action
unpack(change)
}
#[inline]
const fn state_change_(state: State, byte: u8) -> u8 {
let state_idx = state as usize;
let byte_idx = byte as usize;
table::STATE_CHANGES[state_idx][byte_idx]
}
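// Illustrative sketch (not part of the original file): driving `state_change` by hand over
// a small CSI sequence, keeping the previous state whenever `State::Anywhere` comes back,
// as the note on `state_change` above describes.
#[cfg(test)]
mod state_change_example {
    use super::*;

    #[test]
    fn csi_sequence_walks_states() {
        let mut state = State::Ground;
        let mut actions = Vec::new();
        for byte in b"\x1b[1m" {
            let (next, action) = state_change(state, *byte);
            if next != State::Anywhere {
                state = next;
            }
            actions.push(action);
        }
        assert_eq!(state, State::Ground);
        assert_eq!(
            actions,
            [Action::Nop, Action::Nop, Action::Param, Action::CsiDispatch]
        );
    }
}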


@@ -1,361 +0,0 @@
// This file is @generated by crates/anstyle-parse/src/state/codegen.rs
#[rustfmt::skip]
pub(crate) const STATE_CHANGES: [[u8; 256]; 16] = [
// Anywhere
[
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Ground Execute
0x5c,
// Anywhere Nop
0x00,
// Ground Execute
0x5c,
// Escape Nop
0x0a,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// CsiEntry
[
// Anywhere Execute
0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50,
// Anywhere Nop
0x00,
// Anywhere Execute
0x50,
// Anywhere Nop
0x00, 0x00,
// Anywhere Execute
0x50, 0x50, 0x50, 0x50,
// CsiIntermediate Collect
0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23,
// CsiParam Param
0xb4, 0xb4, 0xb4, 0xb4, 0xb4, 0xb4, 0xb4, 0xb4, 0xb4, 0xb4, 0xb4, 0xb4,
// CsiParam Collect
0x24, 0x24, 0x24, 0x24,
// Ground CsiDispatch
0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// CsiIgnore
[
// Anywhere Execute
0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50,
// Anywhere Nop
0x00,
// Anywhere Execute
0x50,
// Anywhere Nop
0x00, 0x00,
// Anywhere Execute
0x50, 0x50, 0x50, 0x50,
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Ground Nop
0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// CsiIntermediate
[
// Anywhere Execute
0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50,
// Anywhere Nop
0x00,
// Anywhere Execute
0x50,
// Anywhere Nop
0x00, 0x00,
// Anywhere Execute
0x50, 0x50, 0x50, 0x50,
// Anywhere Collect
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20,
// CsiIgnore Nop
0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
// Ground CsiDispatch
0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// CsiParam
[
// Anywhere Execute
0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50,
// Anywhere Nop
0x00,
// Anywhere Execute
0x50,
// Anywhere Nop
0x00, 0x00,
// Anywhere Execute
0x50, 0x50, 0x50, 0x50,
// CsiIntermediate Collect
0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23, 0x23,
// Anywhere Param
0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0,
// CsiIgnore Nop
0x02, 0x02, 0x02, 0x02,
// Ground CsiDispatch
0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c, 0x3c,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// DcsEntry
[
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Anywhere Nop
0x00,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00,
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70,
// DcsIntermediate Collect
0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27,
// DcsParam Param
0xb8, 0xb8, 0xb8, 0xb8, 0xb8, 0xb8, 0xb8, 0xb8, 0xb8, 0xb8, 0xb8, 0xb8,
// DcsParam Collect
0x28, 0x28, 0x28, 0x28,
// DcsPassthrough Nop
0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// DcsIgnore
[
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Anywhere Nop
0x00,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00,
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Ground Nop
0x0c,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// DcsIntermediate
[
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Anywhere Nop
0x00,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00,
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70,
// Anywhere Collect
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20,
// DcsIgnore Nop
0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06,
// DcsPassthrough Nop
0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// DcsParam
[
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Anywhere Nop
0x00,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00,
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70,
// DcsIntermediate Collect
0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27, 0x27,
// Anywhere Param
0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0, 0xb0,
// DcsIgnore Nop
0x06, 0x06, 0x06, 0x06,
// DcsPassthrough Nop
0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// DcsPassthrough
[
// Anywhere Put
0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0,
// Anywhere Nop
0x00,
// Anywhere Put
0xd0,
// Anywhere Nop
0x00, 0x00,
// Anywhere Put
0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0, 0xd0,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Ground Nop
0x0c,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// Escape
[
// Anywhere Execute
0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50,
// Anywhere Nop
0x00,
// Anywhere Execute
0x50,
// Anywhere Nop
0x00, 0x00,
// Anywhere Execute
0x50, 0x50, 0x50, 0x50,
// EscapeIntermediate Collect
0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b, 0x2b,
// Ground EscDispatch
0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c,
// DcsEntry Nop
0x05,
// Ground EscDispatch
0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c,
// SosPmApcString Nop
0x0e,
// Ground EscDispatch
0x4c, 0x4c,
// CsiEntry Nop
0x01,
// Ground EscDispatch
0x4c,
// OscString Nop
0x0d,
// SosPmApcString Nop
0x0e, 0x0e,
// Ground EscDispatch
0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// EscapeIntermediate
[
// Anywhere Execute
0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50,
// Anywhere Nop
0x00,
// Anywhere Execute
0x50,
// Anywhere Nop
0x00, 0x00,
// Anywhere Execute
0x50, 0x50, 0x50, 0x50,
// Anywhere Collect
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20,
// Ground EscDispatch
0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// Ground
[
// Anywhere Execute
0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50,
// Anywhere Nop
0x00,
// Anywhere Execute
0x50,
// Anywhere Nop
0x00, 0x00,
// Anywhere Execute
0x50, 0x50, 0x50, 0x50,
// Anywhere Print
0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0,
// Anywhere Execute
0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50,
// Anywhere Nop
0x00,
// Anywhere Execute
0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50, 0x50,
// Anywhere Nop
0x00,
// Anywhere Execute
0x50,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Utf8 BeginUtf8
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// OscString
[
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Ground Nop
0x0c,
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Anywhere Nop
0x00,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00,
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70,
// Anywhere OscPut
0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90,
],
// SosPmApcString
[
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Anywhere Nop
0x00,
// Anywhere Ignore
0x70,
// Anywhere Nop
0x00, 0x00,
// Anywhere Ignore
0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70, 0x70,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Ground Nop
0x0c,
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
// Utf8
[
// Anywhere Nop
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
],
];
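This looks like a vte-style terminal-parser transition table, where each byte appears to pack an action in the high nibble and the next parser state in the low nibble (for example, 0x3c lines up with the "Ground CsiDispatch" comments above, 0x50 with "Anywhere Execute"). The following is a minimal decoding sketch under that assumption; the state and action names and their ordering are inferred from the inline comments in this diff and the conventional vte layout, so the entries not visible here (e.g. Clear, Hook, OscEnd, OscStart, Unhook) should be treated as assumptions.

// Sketch only: name tables inferred from the comments above, not taken
// verbatim from the crate being diffed.
const STATES: [&str; 16] = [
    "Anywhere", "CsiEntry", "CsiIgnore", "CsiIntermediate", "CsiParam",
    "DcsEntry", "DcsIgnore", "DcsIntermediate", "DcsParam", "DcsPassthrough",
    "Escape", "EscapeIntermediate", "Ground", "OscString", "SosPmApcString",
    "Utf8",
];
const ACTIONS: [&str; 16] = [
    "Nop", "Clear", "Collect", "CsiDispatch", "EscDispatch", "Execute",
    "Hook", "Ignore", "OscEnd", "OscPut", "OscStart", "Param", "Print",
    "Put", "Unhook", "BeginUtf8",
];

/// Split one packed table entry: high nibble selects the action,
/// low nibble selects the next state (assumed packing).
fn unpack(entry: u8) -> (&'static str, &'static str) {
    (ACTIONS[(entry >> 4) as usize], STATES[(entry & 0x0f) as usize])
}

fn main() {
    // 0x3c should print as next state Ground with action CsiDispatch,
    // matching the "Ground CsiDispatch" comment in the table above.
    for byte in [0x3c_u8, 0x50, 0xb0, 0x0c] {
        let (action, state) = unpack(byte);
        println!("{byte:#04x} -> next state {state}, action {action}");
    }
}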

@@ -1 +0,0 @@
{"files":{"Cargo.lock":"dbef8401bf3e334d4f3a8230f2506dbc1439dd3aea07cbbc174125eb5fef0eed","Cargo.toml":"4b85a1d05db43bc0aa4ccc814c9e6b922d6a811aa79e02e614f5baccfa803a05","LICENSE-APACHE":"c6596eb7be8581c18be736c846fb9173b69eccf6ef94c5135893ec56bd92ba08","LICENSE-MIT":"6efb0476a1cc085077ed49357026d8c173bf33017278ef440f222fb9cbcb66e6","README.md":"94cda3914d2693b89e0b5855ffff04b971823f6cbae885a1610353254a269ed9","examples/query.rs":"d9f5b94967c7b9579ee399c481148b07fd0fb371f4a5d557017ff86cb5034543","src/lib.rs":"4dd716dbe701acc5644b25d84503d53ea8bc8fb9bf81914b2183526edc63826c","src/windows.rs":"44272f13079fbaed8c16426950c24d9de94d81eba0636138d8e44346eccc0acd"},"package":"e28923312444cdd728e4738b3f9c9cac739500909bb3d3c94b43551b16517648"}

Some files were not shown because too many files have changed in this diff.