This replaces the errol backend with one based on ryu. Only the 128-bit
backend is implemented. It supports all floating-point types and does
not use floating-point logic to print.
Closes #1181.
Closes #1299.
Closes #3612.
* `linux.IO_Uring` -> `linux.IoUring` to align with naming conventions.
* All functions `io_uring_prep_foo` are now methods `prep_foo` on `io_uring_sqe`, which is in a file of its own.
* `SubmissionQueue` and `CompletionQueue` are namespaced under `IoUring`.
This is a breaking change.
The new file and namespace layouts are more idiomatic, and allow us to
eliminate one more usage of `usingnamespace` from the standard library.
2 remain.
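As a minimal sketch, the renamed API now reads like this (the ring setup
is assumed to exist; the user_data value is illustrative):
```zig
const std = @import("std");
const linux = std.os.linux;

// Before: linux.IO_Uring, linux.io_uring_prep_nop(sqe).
// After: linux.IoUring, with prep_* as methods on io_uring_sqe.
fn queueNop(ring: *linux.IoUring) !void {
    const sqe = try ring.get_sqe();
    sqe.prep_nop();
    sqe.user_data = 42; // illustrative
}
```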
Some of the structs I shuffled around might be better namespaced under
CONTEXT; I'm not sure. However, for now, this approach preserves
backwards compatibility.
Eliminates one more usage of `usingnamespace` from the standard library.
3 remain.
Thanks to Zig's lazy analysis, it's fine for these symbols to be
declared on platforms they won't exist on. This is already done in
several places in this file; e.g. `pthread` functions are declared
unconditionally.
Eliminates one more usage of `usingnamespace` from the standard library.
4 remain.
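A minimal illustration of that lazy-analysis point (the decl here is
hypothetical):
```zig
const builtin = @import("builtin");

// Hypothetical decl: invalid on non-Linux targets, but it is only
// analyzed if something references it, so declaring it everywhere is fine.
pub fn linuxOnly() void {
    if (builtin.os.tag != .linux) @compileError("Linux-only");
    // ... Linux-only logic ...
}

test "unreferenced decls are never analyzed" {
    // Nothing references linuxOnly here, so this compiles on every target.
}
```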
Searching GitHub indicated that the only use of these types in the wild is
support in getty-zig, and testing for that support. This eliminates 3 more uses
of usingnamespace from the standard library, and removes some unnecessarily
complex generic code.
This usage of `usingnamespace` was removed fairly trivially - the
resulting code is, IMO, more clear.
Eliminates one more usage of `usingnamespace` from the standard library.
This is a trivial change - this code did `usingnamespace` into an
otherwise empty namespace, so the outer namespace was just unnecessary.
Eliminates one more usage of `usingnamespace` from the standard library.
This fixes an issue with the implementation of #18816. Consider the
following code:
```zig
pub fn Wrap(comptime T: type) type {
    return struct {
        pub const T1 = T;
        inner: struct { x: T1 },
    };
}
```
Previously, the type of `inner` was not considered to be "capturing" any
value, as `T1` is a decl. However, since it is declared within a generic
function, this decl reference depends on the context, and thus should be
treated as a capture.
AstGen has been augmented to tunnel references to decls through closure
when the decl was declared in a potentially-generic context (i.e. within
a function).
This changes the representation of closures in Zir and Sema. Rather than
a pair of instructions `closure_capture` and `closure_get`, the system
now works as follows:
* Each ZIR type declaration (`struct_decl` etc) contains a list of
captures in the form of ZIR indices (or, for efficiency, direct
references to parent captures). This is an ordered list; indexes into
it are used to refer to captured values.
* The `extended(closure_get)` ZIR instruction refers to a value in this
list via a 16-bit index (limiting this index to 16 bits allows us to
store this in `extended`).
* `Module.Namespace` has a new field `captures` which contains the list
of values captured in a given namespace. This is initialized based on
the ZIR capture list whenever a type declaration is analyzed.
This change eliminates `CaptureScope` from semantic analysis, which is a
nice simplification; but the main motivation here is that this change is
a prerequisite for #18816.
fill(0) fills all bytes in the bit reader. If the bit reader is aligned
to the byte, as it is at the end of the stream, this ensures no
overshoot when reading the footer. The footer is 4 bytes (zlib) or 8
bytes (gzip), so for zlib we will use 4 bytes of the BitReader and 8 for
gzip. After align and fill we will read those bytes, leaving the
BitReader empty after that.
* Introduce `-Ddebug-extensions` for enabling compiler debug helpers
* Replace safety mode checks with `std.debug.runtime_safety`
* Replace debugger helper checks with `!builtin.strip_debug_info`
Sometimes, you just have to debug optimized compilers...
Thanks to jacobly0 for figuring this out. The chain of events causing
the failure this triggered is as follows.
* As of a recent commit, certain bodies no longer emit a redundant
`block`, meaning there are more likely to be "interesting"
instructions (i.e. not blocks) at the end of parent GenZir scopes.
* When emitting the first `dbg_stmt` in such a body, the elision logic
incorrectly looks at a tag from an instruction in an enclosing scope.
* The tag of this instruction may be `undefined`, meaning that in unsafe
builds it may be incorrectly identified as a `dbg_stmt` instruction.
* This instruction from another body is clobbered rather than emitting
an actual `dbg_stmt` instruction. Note that this does not produce
invalid ZIR, since the creator of the undefined instruction replaces
the previously-undefined payload later.
Now, all the tests that use `testWithAllSupportedPathTypes` will also run each test with both `/` and `\` as the path separator on Windows.
Also removes the now-redundant "Dir.symLink with relative target that has a / path separator" test, since the same thing is now covered by the "Dir.readLink" test.
In the code `if (cond) { ... }`, the "then body" of the `if` is
technically a block. However, we don't need to emit a real ZIR `block`
corresponding to it, because we are already within a condbr body; we
have a separate gz, and appropriate scoping for allocs and debug
variables. In this case, and many like it, we can trivially elide the
block here, instead emitting the block statements directly into the
current `GenZir`. This results in a significant decrease in ZIR bytes
for real code.
Just to pass CI for regression fix #19126. I'll return to this later.
I currently can't reproduce this on my Windows VM; here I'm failing on
symlink creation, while CI fails later in the process.
In the iterator function, which is the low-level API, don't depend on
`std.fs.MAX_PATH_BYTES`, because it is not defined on all operating
systems, such as freestanding. However, in such environments it still
makes sense to be able to extract from a tar file.
An even more flexible solution would be to accept the buffers as
arguments to iterator(), which I think is a good idea, but for now let's
just set the same upper limit across all operating systems (a
hypothetical sketch of the idea follows below).
Part of #19063.
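A hypothetical sketch of that buffer-passing idea (the option names here
are invented for illustration and are not the current std.tar API):
```zig
const std = @import("std");

// Hypothetical API: the caller supplies the name buffers, so no
// OS-specific path limit needs to be baked into the iterator.
fn listEntries(reader: anytype) !void {
    var file_name_buffer: [1024]u8 = undefined;
    var link_name_buffer: [1024]u8 = undefined;
    var it = std.tar.iterator(reader, .{
        .file_name_buffer = &file_name_buffer,
        .link_name_buffer = &link_name_buffer,
    });
    while (try it.next()) |entry| {
        std.debug.print("{s}\n", .{entry.name});
    }
}
```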
Primarily, this moves Aro from deps/ to lib/compiler/ so that it can be
lazily compiled from source. src/aro_translate_c.zig is moved to
lib/compiler/aro_translate_c.zig, and some of the Zig CLI logic is moved
to a main() function there.
aro_translate_c.zig becomes the "common" import for clang-based
translate-c.
Not all of the compiler could be disentangled from Aro, however, so for
now it remains compiled with the main compiler sources, due to the
clang-based translate-c depending on it. Once aro-based translate-c
achieves feature parity with the clang-based implementation, the
clang-based one can be removed from Zig.
Aro made it unnecessarily difficult to depend on with these .def files
and all these Zig module requirements. I looked at the .def files and
made these observations:
- The canonical source is llvm .def files.
- Therefore there is an update process to sync with llvm that involves
regenerating the .def files in Aro.
- Therefore you might as well just regenerate the .zig files directly
and check those into Aro.
- Also with a small amount of tinkering, the file size on disk of these
generated .zig files can be made many times smaller, without
compromising type safety in the usage of the data.
This would make things much easier on Zig as a downstream project; in
particular, we could remove those pesky stubs when bootstrapping.
I have gone ahead with these changes since they unblock me, and I will
have a chat with Vexu to see what he thinks.
InvalidHandle in OpenError is no longer a possible error on any platform. In the past it was able to be returned in `openOptionsFromFlagsWasi`, but the implementation was changed in 7680c5330c to make it no longer possible.
InvalidHandle in RealPathError was a holdover from before d5312d53a0, which made realpath a compile error on WASI. However, InvalidHandle was also a possible error in the FreeBSD fallback implementation added in 537624734c. This commit changes the FreeBSD fallback implementation to return FileNotFound instead of InvalidHandle which matches how EBADF is handled in all the other `realpath` implementations (including the FreeBSD non-fallback implementation).
Closes #19084
* Support different keep alive defaults with different http versions.
* Fix incorrect usage of `copyBackwards`, which copies in a backwards
direction, allowing data to be moved forward in a buffer, not
backwards (see the sketch below).
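A minimal sketch of those `copyBackwards` semantics (values are
illustrative):
```zig
const std = @import("std");

test "copyBackwards moves data forward in a buffer" {
    var buf = [_]u8{ 'a', 'b', 'c', 'd', 0, 0 };
    // Overlapping move toward higher indices: iterating backwards means
    // no source byte is clobbered before it has been read.
    std.mem.copyBackwards(u8, buf[2..6], buf[0..4]);
    try std.testing.expectEqualSlices(u8, "ababcd", &buf);
}
```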
Follow up to #19079, which made test names fully qualified.
This fixes tests that had now-redundant information in their test names. For example, here's a fully qualified test name before the changes in this commit:
"priority_queue.test.std.PriorityQueue: shrinkAndFree"
and the same test's name after the changes in this commit:
"priority_queue.test.shrinkAndFree"
* make test names contain the fully qualified name
* make test filters match the fully qualified name
* allow multiple test filters, where a test is skipped if it does not
match any of the specified filters
Found while fuzzing. Previously, 1.1897314953572317650857593266280070162E4932
was parsed as +inf, which caused issues for round-trip serialization of
floats. Only f128 had issues, but I have added tests for large normals
of all floating-point types.
The max_exponent for f128 was wrong; it is subtly different in the
decimal code-path, as it is based on where the decimal digit should go.
It needs to be 2 greater than the max exponent (e.g. 308 or 4932) to
work correctly (greater by 1, then we use a >= comparison).
In addition, I've removed the redundant `optimize` constant, which was
only used for testing the slow path locally.
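A minimal check of the fixed behavior (the literal is the value quoted
above):
```zig
const std = @import("std");

test "large f128 normal parses as finite, not +inf" {
    const s = "1.1897314953572317650857593266280070162E4932";
    const x = try std.fmt.parseFloat(f128, s);
    // Previously this saturated to +inf, breaking round-trips.
    try std.testing.expect(std.math.isFinite(x));
}
```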
std.heap.c_allocator was already doing this; however,
std.heap.raw_c_allocator, which asserts that no allocation requires an
alignment greater than 16 bytes, was not.
The zig compiler uses std.heap.raw_c_allocator, so it is affected by
this.
Like in issue #18089, this tar contains the same file name in two
versions differing only in case. Unpacking should fail on
case-insensitive file systems and succeed on case-sensitive ones.
$ tar tvf 18089.tar
18089/
18089/alacritty/
18089/alacritty/darkermatrix.yml
18089/alacritty/Darkermatrix.yml
Windows paths now use WTF-16 <-> WTF-8 conversion everywhere, which is lossless. Previously, conversion of ill-formed UTF-16 paths would either fail or invoke illegal behavior.
WASI paths must be valid UTF-8, and the relevant function calls have been updated to handle the possibility of failure due to paths not being encoded/encodable as valid UTF-8.
Closes #18694. Closes #1774. Closes #2565.
Ill-formed UTF-8 byte sequences are replaced by the replacement character (U+FFFD) according to "U+FFFD Substitution of Maximal Subparts" from Chapter 3 of the Unicode standard, and as specified by https://encoding.spec.whatwg.org/#utf-8-decoder
i.e.:
C:\zig\current\lib\std\debug.zig:726:23: error: no field or member function named 'getDwarfInfoForAddress' in 'dwarf.DwarfInfo'
        if (try module.getDwarfInfoForAddress(unwind_state.debug_info.allocator, unwind_state.dwarf_context.pc)) |di| {
                ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~
C:\zig\current\lib\std\dwarf.zig:663:23: note: struct declared here
pub const DwarfInfo = struct {
                      ^~~~~~
referenced by:
    next_internal: C:\zig\current\lib\std\debug.zig:737:29
    next: C:\zig\current\lib\std\debug.zig:654:31
    remaining reference traces hidden; use '-freference-trace' to see all reference traces
C:\zig\current\lib\std\debug.zig:970:31: error: no field or member function named 'getSymbolAtAddress' in 'dwarf.DwarfInfo'
    const symbol_info = module.getSymbolAtAddress(debug_info.allocator, address) catch |err| switch (err) {
                              ~~~~~~^~~~~~~~~~~~~~~~~~~
C:\zig\current\lib\std\dwarf.zig:663:23: note: struct declared here
pub const DwarfInfo = struct {
Found by fuzzing. The previous numeric function assumed that it was
getting a buffer of size 12, but the mode field is size 8. Fuzzing found
an overflow. Fixing it and adding test cases.
The HTTP specification does not provide a way to escape \r\n in headers,
so it's the API user's responsibility to ensure the header names and
values do not contain \r\n. Also header names must not contain ':'.
It's an assertion, not an error, because the calling code very likely is
using hard-coded values or server-provided values that do not need to be
checked, and the error would be unreachable anyway.
Untrusted user input must not be put directly into HTTP headers.
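A minimal sketch of what that looks like from the caller's side
(assuming a request obtained from receiveHead(); the header values are
illustrative):
```zig
const std = @import("std");

// The name/value strings are the caller's responsibility: an embedded
// "\r\n" (or ':' in a name) would trip the assertion described above.
fn respondPlain(request: *std.http.Server.Request) !void {
    try request.respond("ok\n", .{
        .extra_headers = &.{
            .{ .name = "content-type", .value = "text/plain" },
        },
    });
}
```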
The API automatically handles these requests as expected. After
receiveHead(), the server has a chance to notice the expectation and do
something about it. If it does not, then the Server implementation will
handle it by sending the continuation header when the read stream is
created.
Both respond() and respondStreaming() send the continuation header as
part of discarding the request body, only if the read stream has not
already been created.
Before, I mistakenly thought that a missing content-length meant zero,
when it actually means to stream until the connection is closed.
Now the respond() function accepts a transfer_encoding option, which can
be left as default (use content.len for the content-length), set to
none (omit the content-length), or set to chunked (format the response
as a chunked transfer even though the server already has the entire
contents buffered).
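A sketch of the three modes (only one respond() call is valid per
request, hence the commented alternatives; treat the option spelling as
an assumption):
```zig
const std = @import("std");

fn respondModes(request: *std.http.Server.Request, body: []const u8) !void {
    // Default: content-length is body.len.
    try request.respond(body, .{});
    // Omit content-length (stream until the connection is closed):
    //   try request.respond(body, .{ .transfer_encoding = .none });
    // Chunked framing, even though the body is fully buffered:
    //   try request.respond(body, .{ .transfer_encoding = .chunked });
}
```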
The echo-content tests are moved from test/standalone/http.zig to the
standard library where they are actually run.
* Uncouple std.http.ChunkParser from protocol.zig
* Fix receiveHead not passing leftover buffer through the header parser.
* Fix content-length read streaming
This implementation handles the final chunk length correctly rather than
"hoping" that the buffer already contains \r\n.
In a previous commit I removed a load-bearing use of `@hasDecl` to
detect whether the SO.REUSEPORT option should be set. `@hasDecl` should
not be used for OS feature detection because it can hide bugs.
The new logic checks for the operating system specifically and then does
the thing that is supposed to be done on that operating system directly.
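The pattern now looks roughly like this (a sketch; the socket option
plumbing is illustrative):
```zig
const std = @import("std");
const builtin = @import("builtin");
const posix = std.posix;

// Branch on the OS tag instead of probing with @hasDecl; the condition
// is comptime-known, so the body is only analyzed on matching targets.
fn enableReusePort(sock: posix.socket_t) !void {
    if (builtin.os.tag == .linux or builtin.os.tag.isDarwin()) {
        try posix.setsockopt(
            sock,
            posix.SOL.SOCKET,
            posix.SO.REUSEPORT,
            &std.mem.toBytes(@as(c_int, 1)),
        );
    }
}
```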
Mainly, this removes the poorly named `wait`, `send`, `finish`
functions, which all operated on the same "Response" object, which was
actually being used as the request.
Now, it looks like this:
1. std.net.Server.accept() gives you a std.net.Server.Connection
2. std.http.Server.init() with the connection
3. Server.receiveHead() gives you a Request
4. Request.reader() gives you a body reader
5. Request.respond() is a one-shot, or Request.respondStreaming() creates
a Response
6. Response.writer() gives you a body writer
7. Response.end() finishes the response; Response.endChunked() allows
passing response trailers.
In other words, the type system now guides the API user down the correct
path.
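Put together, a minimal sketch of that flow (address, buffer size, and
response body are illustrative):
```zig
const std = @import("std");

pub fn main() !void {
    const addr = try std.net.Address.parseIp("127.0.0.1", 8080);
    var net_server = try addr.listen(.{ .reuse_address = true });
    defer net_server.deinit();

    var read_buffer: [8000]u8 = undefined;
    while (true) {
        const connection = try net_server.accept(); // 1. Connection
        defer connection.stream.close();
        var server = std.http.Server.init(connection, &read_buffer); // 2.
        var request = try server.receiveHead(); // 3. Request
        // 5. One-shot respond(); content-length comes from content.len.
        try request.respond("hello\n", .{});
    }
}
```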
receiveHead allows extra bytes to be read into the read buffer, and then
will reuse those bytes for the body or the next request upon connection
reuse.
respond(), the one-shot function, will send the entire response in one
syscall.
Streaming response bodies no longer wastefully wrap every call to write
with a chunk header and trailer; instead, the HTTP chunk wrapper is only
sent when flushing. This means the user can still control when it
happens, but no unnecessary chunks are added.
Empirically, in my example project that uses this API, the usage code is
significantly less noisy, it has less error handling while handling
errors more correctly, it's more obvious what is happening, and it is
syscall-optimal.
Additionally:
* Uncouple std.http.HeadParser from protocol.zig
* Delete std.http.Server.Connection; use std.net.Server.Connection instead.
- The API user supplies the read buffer when initializing the
http.Server, and it is used for the HTTP head as well as a buffer
for reading the body into.
* Replace and document the State enum. No longer is there both "start"
and "first".
Before, this code constructed an arena allocator and then used it when
handling redirects.
You know what's better than having threads fight over an allocator?
Avoiding dynamic memory allocation in the first place.
This commit reuses the http headers static buffer for handling
redirects. The new location is copied to the beginning of the static
header buffer and then the subsequent request uses a subslice of that
buffer.
* "storage" is a better name than "strategy".
* The most flexible memory-based storage API is appending to an
ArrayList.
* HTTP method should default to POST if there is a payload.
* Avoid storing unnecessary data in the FetchResult
* Avoid the need for a deinit() method in the FetchResult
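A hedged sketch of the ArrayList-backed storage (the field names reflect
my reading of the new API and should be treated as assumptions):
```zig
const std = @import("std");

fn fetchToList(gpa: std.mem.Allocator, url: []const u8) !void {
    var client = std.http.Client{ .allocator = gpa };
    defer client.deinit();

    var body = std.ArrayList(u8).init(gpa);
    defer body.deinit();

    // Appending to an ArrayList is the memory-based storage; a request
    // with a payload would default to the POST method per the list above.
    const result = try client.fetch(.{
        .location = .{ .url = url },
        .response_storage = .{ .dynamic = &body },
    });
    std.debug.print("status: {d}, body: {d} bytes\n", .{
        @intFromEnum(result.status), body.items.len,
    });
}
```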
The decisions that this logic made about how to handle files are beyond
repair:
- fail to use sendfile() on a plain connection
- redundant stat
- does not handle arbitrary streams
So, file-based response storage is no longer supported. Users should use
the lower-level open() API which allows avoiding these pitfalls.
* add API for iterating over custom HTTP headers
* remove `trailing` flag from std.http.Client.parse. Instead, simply
don't call parse() for trailers.
* fix the logic inside that parse() function: it was using the wrong
std.mem functions, ignoring malformed data, and returning errors on
dead branches.
* simplify logic inside wait()
* fix HeadersParser not dropping the 2 read bytes of \r\n after a
chunked transfer
* move the trailers test to be a std lib unit test and make it pass
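A minimal sketch of the header iteration API (assuming a server-side
request; the iterateHeaders name is my understanding of the API and
should be treated as an assumption):
```zig
const std = @import("std");

fn dumpHeaders(request: *std.http.Server.Request) void {
    var it = request.iterateHeaders();
    while (it.next()) |header| {
        std.debug.print("{s}: {s}\n", .{ header.name, header.value });
    }
}
```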
This reverts commit 42be972a72c86b32ad8403d082ab42763c6facec.
Using a bit to distinguish between headers and trailers is fine. It was
just named and documented poorly.
I originally removed these in 402f967ed5.
I allowed them to be added back in #15299 because they were smuggled in
alongside a bug fix, however, I wasn't kidding when I said that I wanted
to take the design of std.http in a different direction than using this
data structure.
Instead, some headers are provided via explicit field names populated
while parsing the HTTP request/response, and some are provided via
new fields that support passing extra, arbitrary headers.
This resulted in simplification of logic in many places, as well as
elimination of the possibility of failure in many places. There is
less deinitialization code happening now.
Furthermore, it made it no longer necessary to clone the headers data
structure in order to handle redirects.
http_proxy and https_proxy fields are now pointers since it is common
for them to be unpopulated.
loadDefaultProxies is changed into initDefaultProxies to communicate
that it does not actually load anything from disk or from the network.
The function is now leaky; the API user must pass an already
instantiated arena allocator. Removes the need to deinitialize proxies.
Before, proxies stored arbitrary sets of headers. Now they only store
the authorization value.
Removed the duplicated code between https_proxy and http_proxy. Finally,
parsing failures of the environment variables result in errors being
emitted rather than silently ignoring the proxy.
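Usage then looks roughly like this (a sketch; the caller-owned arena is
the point):
```zig
const std = @import("std");

fn makeClient(gpa: std.mem.Allocator, arena: *std.heap.ArenaAllocator) !std.http.Client {
    var client = std.http.Client{ .allocator = gpa };
    errdefer client.deinit();
    // Leaky by design: parsed proxy data lives in the caller's arena,
    // so there is nothing proxy-related to deinitialize later.
    try client.initDefaultProxies(arena.allocator());
    return client;
}
```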
error.CompressionNotSupported is renamed to
error.CompressionUnsupported, matching the naming convention from all
the other errors in the same set.
Removed documentation comments that were redundant with field and type
names.
Disabling zstd decompression in the server for now; see #18937.
I found some apparently dead code in src/Package/Fetch/git.zig. I want
to check with Ian about this.
I discovered that test/standalone/http.zig is dead code; it is only
being compiled, not run. Furthermore, it hangs at the end if you
run it manually. The previous commits in this branch were written under
the assumption that this test was being run with
`zig build test-standalone`.
This is a state machine that already has a `state` field. No need to
additionally store "done" - it just makes things unnecessarily
complicated and buggy.
The buffer for HTTP headers is now always provided via a static buffer.
As a consequence, OutOfMemory is no longer a member of the read() error
set, and the API and implementation of Client and Server are simplified.
error.HttpHeadersExceededSizeLimit is renamed to
error.HttpHeadersOversize.
Running the fuzzing tar test with [zig std lib
fuzzing](https://github.com/squeek502/zig-std-lib-fuzzing) reached an
assert in the tar implementation. An assert (in the std lib) should not
be reachable by external input, so I'm fixing this to return an error.
If an adapted string key with embedded nulls was put in a hash map with
`std.hash_map.StringIndexAdapter`, an incorrect hash would be entered
for that entry. A later lookup of the exact key that matches the prefix
of the original key up to the first null would then sometimes match this
entry due to hash collisions, and sometimes not, if performed after a
grow + rehash. This caused the same key to exist with two different
indices, breaking every string equality comparison; for example,
claiming that a container type doesn't contain a field because the field
name string in the struct and the string representing the identifier to
look up might be equal strings but have different string indices.
This could maybe be fixed by changing
`std.hash_map.StringIndexAdapter.hash` to only hash up to the first
null, ensuring that the entry's hash is correct and that all future
lookups are consistent, but I don't trust anything, so instead I assert
that there are no embedded nulls.
BufferedTee provides a reader interface to the consumer. Data read by
the consumer is also written to the output. The output is held
lookahead_size bytes behind the consumer, allowing the consumer to put
back some bytes to be read again. On flush, all consumed bytes are
flushed to the output.

    input -> tee -> consumer
              |
            output

input - underlying unbuffered reader
output - writer, receives data read by the consumer
consumer - uses the provided reader interface

If lookahead_size is zero, the output always has the same bytes as the
consumer.
This commit works around #18967 by adding an `AccumulatingReader`, which
accumulates data read from the underlying packfile, and by keeping track
of the position in the packfile and hash/checksum information separately
rather than using reader composition. That is, the packfile position and
hashes/checksums are updated with the accumulated read history data only
after we can determine what data has actually been used by the
decompressor rather than merely being buffered.
The only addition to the standard library APIs to support this change is
the `unreadBytes` function in `std.compress.flate.Inflate`, which allows
the user to determine how many bytes have been read only for buffering
and not used as part of compressed data.
These changes can be reverted if #18967 is resolved with a decompressor
that reads precisely only the number of bytes needed for decompression.
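A sketch of how `unreadBytes` lets a caller recover the true
compressed-stream position (the countingReader composition is
illustrative):
```zig
const std = @import("std");

// Returns how many underlying-stream bytes the inflater actually
// consumed, excluding bytes it read only into its lookahead buffer.
fn compressedBytesConsumed(reader: anytype) !u64 {
    var counting = std.io.countingReader(reader);
    var inflate = std.compress.flate.decompressor(counting.reader());
    var sink: [4096]u8 = undefined;
    while (try inflate.reader().read(&sink) != 0) {}
    return counting.bytes_read - inflate.unreadBytes();
}
```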
This code is run when printing a stack trace in a debug executable, so
it has to be fast even without compiler optimizations.
Adding a `@panic` to the top of `main` and running an x86_64 backend
compiled compiler goes from `1m32.773s` to `0m3.232s`.
Until now, literal and distance code lengths were treated as two
different arrays. But according to the RFC, they can overlap:

    The code length repeat codes can cross from HLIT + 257 to the
    HDIST + 1 code lengths. In other words, all code lengths form
    a single sequence of HLIT + HDIST + 258 values.

Now code lengths are decoded into a single array, which is then split
into literal and distance parts.
It was useful during development.
From andrewrk's code review comment:

    In fact, Zig does not guarantee the @sizeOf of structs, and so these
    tests are not valid.
Encountered in a recent CI run on an aarch64-windows dev kit.
Pretty sure I disabled the virus scanner but it looks like it turned
itself back on with a Windows Update.
Rather than marking the new error code as unreachable in the places
where it is unexpected, this commit makes it return `error.Unexpected`.
Zig deflate compression/decompression implementation. It supports compression and decompression of gzip, zlib and raw deflate formats.
Fixes #18062.
This PR replaces the current compress/gzip and compress/zlib packages. The deflate package is renamed to flate. Flate is a common name for deflate/inflate, where deflate is compression and inflate is decompression.
There are breaking changes. Method signatures have changed because of the removal of the allocator, and I also unified the API for all three namespaces (flate, gzip, zlib).
Currently I have put the old packages under a v1 namespace; they are still available as compress/v1/gzip, compress/v1/zlib, compress/v1/deflate. The idea is to give users of the current API a little time, letting them postpone analyzing what they have to change. Although that raises the question of when it is safe to remove the v1 namespace.
Here is the current API in the compress package:
```Zig
// deflate
fn compressor(allocator, writer, options) !Compressor(@TypeOf(writer))
fn Compressor(comptime WriterType) type
fn decompressor(allocator, reader, null) !Decompressor(@TypeOf(reader))
fn Decompressor(comptime ReaderType: type) type
// gzip
fn compress(allocator, writer, options) !Compress(@TypeOf(writer))
fn Compress(comptime WriterType: type) type
fn decompress(allocator, reader) !Decompress(@TypeOf(reader))
fn Decompress(comptime ReaderType: type) type
// zlib
fn compressStream(allocator, writer, options) !CompressStream(@TypeOf(writer))
fn CompressStream(comptime WriterType: type) type
fn decompressStream(allocator, reader) !DecompressStream(@TypeOf(reader))
fn DecompressStream(comptime ReaderType: type) type
// xz
fn decompress(allocator: Allocator, reader: anytype) !Decompress(@TypeOf(reader))
fn Decompress(comptime ReaderType: type) type
// lzma
fn decompress(allocator, reader) !Decompress(@TypeOf(reader))
fn Decompress(comptime ReaderType: type) type
// lzma2
fn decompress(allocator, reader, writer) !void
// zstandard:
fn DecompressStream(ReaderType, options) type
fn decompressStream(allocator, reader) DecompressStream(@TypeOf(reader), .{})
struct decompress
```
The proposed naming convention:
- Compressor/Decompressor for functions which return a type, like Reader/Writer/GeneralPurposeAllocator
- compressor/decompressor for functions which are initializers for that type, like reader/writer/allocator
- compress/decompress for one-shot operations, which accept a reader/writer pair, like read/write/alloc
```Zig
/// Compress from reader and write compressed data to the writer.
fn compress(reader: anytype, writer: anytype, options: Options) !void
/// Create a Compressor which writes compressed data to the writer.
fn compressor(writer: anytype, options: Options) !Compressor(@TypeOf(writer))
/// Compressor type
fn Compressor(comptime WriterType: type) type
/// Decompress from reader and write plain data to the writer.
fn decompress(reader: anytype, writer: anytype) !void
/// Create Decompressor which reads from reader.
fn decompressor(reader: anytype) Decompressor(@TypeOf(reader))
/// Decompressor type
fn Decompressor(comptime ReaderType: type) type
```
Comparing this implementation with the one we currently have in Zig's standard library (std):
Std is roughly 1.2-1.4 times slower in decompression, and 1.1-1.2 times slower in compression. Compressed sizes are pretty much the same in both cases.
More results in [this](https://github.com/ianic/flate) repo.
This library uses static allocations for all structures and doesn't require an allocator. That makes sense especially for deflate, where all structures and internal buffers are allocated to their full size anyway. It is less clear-cut for inflate, where the std version uses less memory by not preallocating arrays to their theoretical max size, since they are usually not fully used.
For deflate this library allocates 395K while std allocates 779K.
For inflate this library allocates 74.5K while std allocates around 36K.
The inflate difference is because we use a 64K history here instead of the 32K in std.
If merged, existing usages of compress gzip/zlib/deflate will need some changes. Here is an example with the necessary changes in comments:
```Zig
const std = @import("std");

// To get this file:
// wget -nc -O war_and_peace.txt https://www.gutenberg.org/ebooks/2600.txt.utf-8
const data = @embedFile("war_and_peace.txt");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer std.debug.assert(gpa.deinit() == .ok);
    const allocator = gpa.allocator();

    try oldDeflate(allocator);
    try new(std.compress.flate, allocator);

    try oldZlib(allocator);
    try new(std.compress.zlib, allocator);

    try oldGzip(allocator);
    try new(std.compress.gzip, allocator);
}

pub fn new(comptime pkg: type, allocator: std.mem.Allocator) !void {
    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Compressor
    var cmp = try pkg.compressor(buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.finish();

    var fbs = std.io.fixedBufferStream(buf.items);
    // Decompressor
    var dcp = pkg.decompressor(fbs.reader());

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);
    try std.testing.expectEqualSlices(u8, data, plain);
}

pub fn oldDeflate(allocator: std.mem.Allocator) !void {
    const deflate = std.compress.v1.deflate;

    // Compressor
    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();
    // Remove allocator
    // Rename deflate -> flate
    var cmp = try deflate.compressor(allocator, buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.close(); // Rename to finish
    cmp.deinit(); // Remove

    // Decompressor
    var fbs = std.io.fixedBufferStream(buf.items);
    // Remove allocator and last param
    // Rename deflate -> flate
    // Remove try
    var dcp = try deflate.decompressor(allocator, fbs.reader(), null);
    defer dcp.deinit(); // Remove

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);
    try std.testing.expectEqualSlices(u8, data, plain);
}

pub fn oldZlib(allocator: std.mem.Allocator) !void {
    const zlib = std.compress.v1.zlib;

    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Compressor
    // Rename compressStream => compressor
    // Remove allocator
    var cmp = try zlib.compressStream(allocator, buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.finish();
    cmp.deinit(); // Remove

    var fbs = std.io.fixedBufferStream(buf.items);
    // Decompressor
    // decompressStream => decompressor
    // Remove allocator
    // Remove try
    var dcp = try zlib.decompressStream(allocator, fbs.reader());
    defer dcp.deinit(); // Remove

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);
    try std.testing.expectEqualSlices(u8, data, plain);
}

pub fn oldGzip(allocator: std.mem.Allocator) !void {
    const gzip = std.compress.v1.gzip;

    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Compressor
    // Rename compress => compressor
    // Remove allocator
    var cmp = try gzip.compress(allocator, buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.close(); // Rename to finish
    cmp.deinit(); // Remove

    var fbs = std.io.fixedBufferStream(buf.items);
    // Decompressor
    // Rename decompress => decompressor
    // Remove allocator
    // Remove try
    var dcp = try gzip.decompress(allocator, fbs.reader());
    defer dcp.deinit(); // Remove

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);
    try std.testing.expectEqualSlices(u8, data, plain);
}
```
* Add `timedWait` to `std.Thread.Semaphore`
Add example to documentation of `std.Thread.Semaphore`
* Add unit test for thread semaphore timed wait
Fix missing try
* Change unit test to be simpler
* Change `timedWait()` to keep a deadline
* Change `timedWait()` to return earlier in some scenarios
* Change `timedWait()` to keep a deadline (based on std.Timer)
(similar to std.Thread.Futex)
---------
Co-authored-by: protty <45520026+kprotty@users.noreply.github.com>
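A minimal sketch of the timeout path (the duration is illustrative):
```zig
const std = @import("std");

test "timedWait returns error.Timeout when nothing is posted" {
    var sem = std.Thread.Semaphore{};
    // No post() happens, so the wait should time out after ~10ms.
    try std.testing.expectError(error.Timeout, sem.timedWait(10 * std.time.ns_per_ms));
}
```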
* std.c: consolidate some definitions, making them share code. For
example, freebsd, dragonfly, and openbsd can all share the same
`pthread_mutex_t` definition.
* add type safety to std.c.O
- this caught a bug where mode flags were incorrectly passed as the
open flags.
* 3 fewer uses of usingnamespace keyword
* as per convention, remove purposeless field prefixes from struct field
names even if they have those prefixes in the corresponding C code.
* fix incorrect wasi libc Stat definition
* remove C definitions from incorrectly being in std.os.wasi
* make std.os.wasi definitions type safe
* go through wasi native APIs even when linking libc because the libc
APIs are problematic and wasteful
* don't expose WASI definitions in std.posix
* remove std.os.wasi.rights_t.ALL: this is a footgun. should it be all
future rights too? or only all current rights known? both are
the wrong answer.