I pointed a fuzzer at the tokenizer and it crashed immediately. Upon
inspection, I was dissatisfied with the implementation. This commit
removes several mechanisms:
* Removes the "invalid byte" compile error note.
* Dramatically simplifies tokenizer recovery by making recovery always
occur at newlines, and never otherwise.
* Removes UTF-8 validation.
* Moves some character validation logic to `std.zig.parseCharLiteral`.
Removing UTF-8 validation is a regression of #663; however, the existing
implementation was already buggy. When adding this functionality back,
it must be fuzz-tested while checking the property that it matches an
independent Unicode validation implementation on the same file. While
we're at it, fuzzing should check the other properties of that proposal,
such as no ASCII control characters existing inside the source code.
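As a sketch, such a differential check could look roughly like this (the
`independentValidate` callback is a hypothetical stand-in for the second,
independently written validator):

const std = @import("std");

// Hypothetical property check: our validator must agree with an
// independently written one on every fuzzed input.
fn checkUtf8ValidationProperty(
    independentValidate: *const fn ([]const u8) bool,
    input: []const u8,
) !void {
    const ours = std.unicode.utf8ValidateSlice(input);
    const theirs = independentValidate(input);
    if (ours != theirs) return error.ValidationMismatch;
}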
Other changes included in this commit:
* Deprecate `std.unicode.utf8Decode` and its WTF-8 counterpart. This
function has an awkward API that is too easy to misuse.
* Make `utf8Decode2` and friends use arrays as parameters, eliminating a
runtime assertion in favor of using the type system (see the sketch below).
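For the latter change, a rough sketch of its shape (abbreviated, not the real
std.unicode implementation; some checks such as overlong-encoding rejection
are omitted):

const assert = @import("std").debug.assert;

// Old shape (sketch): the slice length has to be asserted at runtime.
fn utf8Decode2Old(bytes: []const u8) error{Utf8ExpectedContinuation}!u21 {
    assert(bytes.len == 2);
    if ((bytes[1] & 0b1100_0000) != 0b1000_0000) return error.Utf8ExpectedContinuation;
    return (@as(u21, bytes[0] & 0b0001_1111) << 6) | (bytes[1] & 0b0011_1111);
}

// New shape (sketch): a [2]u8 parameter makes the length part of the type,
// so the runtime assertion disappears.
fn utf8Decode2New(bytes: [2]u8) error{Utf8ExpectedContinuation}!u21 {
    if ((bytes[1] & 0b1100_0000) != 0b1000_0000) return error.Utf8ExpectedContinuation;
    return (@as(u21, bytes[0] & 0b0001_1111) << 6) | (bytes[1] & 0b0011_1111);
}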
After this commit, the input found by fuzzing, which was
"\x07\xd5\x80\xc3=o\xda|a\xfc{\x9a\xec\x91\xdf\x0f\\\x1a^\xbe;\x8c\xbf\xee\xea"
no longer causes a crash. However, I did not feel the need to add this
test case because the simplified logic eradicates most crashes of this
nature.
This is a fairly small hobby OS that has not seen development in 2 years. Our
current policy is that hobby OSs should use the `other` tag.
https://github.com/zhmu/ananas
What is `sparcel`, you might ask? Good question!
If you take a peek in the SPARC v8 manual, §2.2, it is quite explicit that SPARC
v8 is a big-endian architecture. No little-endian or mixed-endian support to be
found here.
On the other hand, the SPARC v9 manual, in §3.2.1.2, states that it has support
for mixed-endian operation, with big-endian mode being the default.
Ok, so `sparcel` must just be referring to SPARC v9 running in little-endian
mode, surely?
Nope:
* 40b4fd7a3e/llvm/lib/Target/Sparc/SparcTargetMachine.cpp (L226)
* 40b4fd7a3e/llvm/lib/Target/Sparc/SparcTargetMachine.cpp (L104)
So, `sparcel` in LLVM is referring to some sort of fantastical little-endian
SPARC v8 architecture. I've scoured the internet and I can find absolutely no
evidence that such a thing exists or has ever existed. In fact, I can find no
evidence that a little-endian implementation of SPARC v9 ever existed, either.
Or any SPARC version, actually!
The support was added here: https://reviews.llvm.org/D8741
Notably, there is no mention whatsoever of what CPU this might be referring to,
and no justification given for the "but some are little" comment added in the
patch.
My best guess is that this might have been some private exercise in creating a
little-endian version of SPARC that never saw the light of day. Given that SPARC
v8 explicitly doesn't support little-endian operation (let alone little-endian
instruction encoding!), and no CPU is known to be implemented as such, I think
it's very reasonable for us to just remove this support.
* Elaborate on the sub-variants of Variant I.
* Clarify the use of the TCB term.
* Rename a bunch of stuff to be more accurate/descriptive.
* Follow Zig's namespacing conventions more closely.
* Use a structure for the ABI TCB.
No functional change intended.
The code would cause LLVM to emit a jump table for the switch in the loop over
the dynamic tags. That jump table was far enough away that the compiler decided
to go through the GOT, which would of course break at this early stage as we
haven't applied MIPS's local GOT relocations yet, nor can we until we've walked
through the _DYNAMIC array.
The first attempt at rewriting this used code like this:
var sorted_dynv = [_]elf.Addr{0} ** elf.DT_NUM;
But this is also problematic as it results in a memcpy() call. Instead, we
explicitly initialize it to undefined and use a loop of volatile stores to
clear it.
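The approach used instead looks roughly like this (sketch; `elf` is `std.elf`,
and the real code lives in the early dynamic-linking startup path):

var sorted_dynv: [elf.DT_NUM]elf.Addr = undefined;
for (&sorted_dynv) |*dyn| {
    // A volatile store cannot be optimized into a memset()/memcpy() call.
    const ptr: *volatile elf.Addr = dyn;
    ptr.* = 0;
}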
In https://github.com/ziglang/zig/pull/19641, this binding changed from `[*]u16` to `LPWSTR`, which made it a sentinel-terminated pointer. This introduced a compile error in the `std.os.windows.GetModuleFileNameW` wrapper, since it takes a `[*]u16` pointer. This commit changes the binding back to what it was before instead of introducing a breaking change to `std.os.windows.GetModuleFileNameW`.
Related: https://github.com/ziglang/zig/issues/20858
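For illustration, the breakage boils down to this coercion rule (hypothetical,
simplified declarations, not the real bindings):

fn takesSentinelTerminated(s: [*:0]u16) void {
    _ = s;
}

fn takesManyItem(s: [*]u16) void {
    _ = s;
}

fn demo(sentinel_buf: [*:0]u16, plain_buf: [*]u16) void {
    // Dropping the sentinel is an allowed coercion:
    takesManyItem(sentinel_buf);
    // The reverse is a compile error, which is what broke the wrapper:
    // takesSentinelTerminated(plain_buf);
    _ = plain_buf;
}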
This does not completely ignore static asserts: they are validated by Aro
during parsing, and any failures will produce an error and a non-zero exit code.
Emit a warning comment that _Static_asserts are not translated - this
matches the behavior of the existing clang-based translate-c.
Aro currently does not store source locations for _Static_assert
declarations, so I've hard-coded token index 0 for now.
Accesses to this global variable can require relocations on some platforms (e.g.
MIPS). If we do it before PIE relocations have been applied, we'll crash.
It's actually important for the ABI that r25 (t9) contains the address of the
called function, so that this standard prologue sequence works:
lui $2, %hi(_gp_disp)        # $2 = high 16 bits of the function-to-$gp offset
addiu $2, $2, %lo(_gp_disp)  # $2 += low 16 bits of that offset
addu $gp, $2, $t9            # $gp = offset + address of the called function (t9)
(This is a bit similar to the ToC situation on powerpc that was fixed in
7bc78967b400322a0fc5651f37a1b0428c37fb9d.)
statx() is strictly superior to stat() and friends. We can rely on it because
the standard library declares Linux 4.19 to be the minimum supported version in
std.Target. It is also necessary on riscv32, where statx() is the only option.
While here, I improved std.fs.File.metadata() to gather as much information as
possible when calling statx(), since that is the expectation for this particular
API.
This is kind of a hack because the timespec in UAPI headers is actually still
32-bit while __kernel_timespec is 64-bit. But, importantly, all the syscalls
take __kernel_timespec from the get-go (because riscv32 support is so recent).
Defining our timespec this way will allow all the syscall wrappers in
std.os.linux to do the right thing for riscv32. For other 32-bit architectures,
we have to use the 64-bit time syscalls explicitly to solve the Y2038 problem.
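As a rough sketch (not the exact std.os.linux declaration), the riscv32
definition matches __kernel_timespec:

// Sketch: both fields are 64-bit to match __kernel_timespec, which is what
// every syscall on the riscv32 port expects.
pub const timespec = extern struct {
    sec: i64,
    nsec: i64,
};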
The loongarch64 syscalls were not updated because that kernel port appears to
have been broken for a year or so:
In file included from arch/loongarch/include/uapi/asm/unistd.h:5:
include/uapi/asm-generic/unistd.h:2:10: fatal error: 'asm/bitsperlong.h' file not found
That file is just missing from the tree. 🤷
Deprecates std.fs.atomicSymLink and removes the allocator requirement
from the new std.fs.Dir.atomicSymLink. Replaces the two usages of this
within std.
I did not include the TODOs from the original code that were based on
`switch (err) { ..., else => return err }` not correctly inferring that the
cases handled in `...` are impossible in the error union return type, because
these are not specified in many places. I can add them back if wanted.
Thank you @squeek502 for help with fixing buffer overflows!
The core functionality now lives in two general functions,
`extremeInSubtreeOnDirection()` and `nextOnDirection()`, so all the other
traversal functions (`getMin()`, `getMax()`, and `InorderIterator`) are
just trivial calls to these core functions.
The two added functions, `Node.next()` and `Node.prev()`, are also just
trivial calls to them.
* std.Treap traversal direction: use u1 instead of usize.
* Treap: fix getMin() and getMax(), and add tests for them.
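To illustrate the shape of the refactor, here is a self-contained sketch (the
real std.Treap node also carries a key, priority, and parent pointer, and
these are methods rather than free functions):

const Node = struct {
    children: [2]?*Node,
};

/// Walk as far as possible in one direction (0 = left, 1 = right) within the
/// subtree rooted at `node`.
fn extremeInSubtreeOnDirection(node: ?*Node, direction: u1) ?*Node {
    var current: *Node = node orelse return null;
    while (current.children[direction]) |child| current = child;
    return current;
}

/// getMin()/getMax() then reduce to trivial calls.
fn getMin(root: ?*Node) ?*Node {
    return extremeInSubtreeOnDirection(root, 0);
}

fn getMax(root: ?*Node) ?*Node {
    return extremeInSubtreeOnDirection(root, 1);
}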
This is a misfeature that we inherited from LLVM:
* https://reviews.llvm.org/D61259
* https://reviews.llvm.org/D61939
(`aarch64_32` and `arm64_32` are equivalent.)
I truly have no idea why this triple passed review in LLVM. It is, to date, the
*only* tag in the architecture component that is not, in fact, an architecture.
In reality, it is just an ILP32 ABI for AArch64 (*not* AArch32).
The triples that use `aarch64_32` look like `aarch64_32-apple-watchos`. Yes,
that triple is exactly what you think; it has no ABI component. They really,
seriously did this.
Since only Apple could come up with silliness like this, it should come as no
surprise that no one else uses `aarch64_32`. Later on, a GNU ILP32 ABI for
AArch64 was developed, and support was added to LLVM:
* https://reviews.llvm.org/D94143
* https://reviews.llvm.org/D104931
Here, sanity seems to have prevailed, and a triple using this ABI looks like
`aarch64-linux-gnu_ilp32` as you would expect.
As can be seen from the diffs in this commit, there was plenty of confusion
throughout the Zig codebase about what exactly `aarch64_32` was. So let's just
remove it. In its place, we'll use `aarch64-watchos-ilp32`,
`aarch64-linux-gnuilp32`, and so on. We'll then translate these appropriately
when talking to LLVM. Hence, this commit adds the `ilp32` ABI tag (we already
have `gnuilp32`).
Contributes to #15607
Although the case is not handled in `openatWasi` (as I could not get a
working WASI environment to test the change), I have added a FIXME
addressing it and linking to the issue.