This passes the `-Wl,-no-pie` linker argument; Go does the same. From the `ld(1)`
man page:
Create a position dependent executable. This is the default.
Not adding it to the help text, because this is the default.
ECDSA is the most commonly used signature scheme today, mainly for
historical and conformance reasons. It is a necessary evil for
many standard protocols such as TLS and JWT.
It is tricky to implement securely and has been the root cause of
multiple security disasters, from the PlayStation 3 hack to several
critical issues in OpenSSL and Java.
This implementation combines lessons learned from the past with
recent recommendations.
In Zig, the NIST curves that ECDSA is almost always instantiated with
use formally verified field arithmetic, giving us peace of mind
even on edge cases. The API also rejects neutral elements where it
matters, and unconditionally checks for non-canonical encodings of
scalars and group elements. This automatically eliminates common
vulnerabilities such as https://sk.tl/2LpS695v .
ECDSA's security heavily relies on the security of the random number
generator, which is a concern in some environments.
This implementation mitigates this by computing deterministic
nonces using the conservative scheme from Pornin et al. with the
optional addition of randomness as proposed in Ericsson's
"Deterministic ECDSA and EdDSA Signatures with Additional Randomness"
document. This approach mitigates both the consequences of a weak RNG
and the practical impact of fault attacks.
Project Wycheproof is a Google project to test crypto libraries against
known attacks by triggering edge cases. It discovered vulnerabilities
in virtually all major ECDSA implementations.
The entire set of ECDSA-P256-SHA256 test vectors from Project Wycheproof
is included here. Zero defects were found in this implementation.
The public API differs from the Ed25519 one. Instead of raw byte strings
for keys and signatures, we introduce Signature, PublicKey and SecretKey
structures.
The reason is that a raw byte representation would not be optimal.
There are multiple standard representations for keys and signatures,
and decoding/encoding them may not be cheap (field elements have to be
converted from/to the Montgomery domain).
So the intent is to eventually move Ed25519 to the same API, which
will not introduce any performance regression, but will give us a
consistent API that we can also reuse for RSA.
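As a rough, self-contained illustration of the pattern (not the actual std.crypto declarations; names, sizes, and the validation rule are made up), a struct keeps the decoded form and makes the encode/decode boundary explicit:
```zig
const std = @import("std");

// Hypothetical sketch: the decoded representation lives inside the struct,
// and conversions to/from bytes happen only in fromBytes/toBytes, so callers
// pay the decoding (and validation) cost once instead of per operation.
const PublicKey = struct {
    decoded: u64, // stand-in for field elements kept out of the Montgomery domain

    pub const encoded_length = 8;

    pub fn fromBytes(bytes: [encoded_length]u8) !PublicKey {
        const raw = std.mem.bytesToValue(u64, &bytes);
        if (raw == 0) return error.NonCanonical; // stand-in for rejecting invalid encodings
        return .{ .decoded = raw };
    }

    pub fn toBytes(self: PublicKey) [encoded_length]u8 {
        return std.mem.toBytes(self.decoded);
    }
};

test "decode once, reuse the decoded form" {
    const bytes = [_]u8{ 1, 2, 3, 4, 5, 6, 7, 8 };
    const pk = try PublicKey.fromBytes(bytes);
    try std.testing.expectEqual(bytes, pk.toBytes());
    try std.testing.expectError(error.NonCanonical, PublicKey.fromBytes([_]u8{0} ** 8));
}
```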
Instead of always using std.testing.allocator, the test harness now follows
the same logic as the self-hosted compiler for choosing an allocator: it
uses the C allocator when linking libc, std.testing.allocator otherwise, and
respects `-Dforce-gpa` to override the decision. I did this because
I found GeneralPurposeAllocator to be prohibitively slow when doing
multi-threading, even in the context of a debug build.
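A minimal sketch of that selection logic (self-contained stand-ins; the real harness reads `force_gpa` from the build options):
```zig
const std = @import("std");
const builtin = @import("builtin");

// Stand-in for the `-Dforce-gpa` build option.
const force_gpa = false;

var gpa_state = std.heap.GeneralPurposeAllocator(.{}){};

// -Dforce-gpa wins; otherwise use the C allocator when libc is linked,
// and fall back to std.testing.allocator (GPA-backed) otherwise.
fn chooseAllocator() std.mem.Allocator {
    if (force_gpa) return gpa_state.allocator();
    if (builtin.link_libc) return std.heap.c_allocator;
    return std.testing.allocator;
}
```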
There is now a second thread pool which is used to spawn each
test case. The stage2 tests are passed the first thread pool. If it were
only multi-threading the stage1 tests then we could use the same thread
pool for everything. The problem with this strategy for stage2, however,
is that stage2 wants to spawn tasks and then call wait() on the main
thread. If we use the same thread pool for everything, we get a deadlock
because all the threads end up hanging at wait() and nothing gets done.
So we use the second thread pool to simulate a "process pool" of sorts.
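Schematically (using today's std.Thread.Pool purely for illustration; the harness used the compiler's own thread pool and different names):
```zig
const std = @import("std");

// One pool acts as a "process pool" running whole test cases; a second pool
// is handed to stage2 for its own tasks, so a test case blocking in wait()
// can never starve the pool that stage2 needs to make progress.
pub fn main() !void {
    var gpa_state = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa_state.deinit();
    const gpa = gpa_state.allocator();

    var case_pool: std.Thread.Pool = undefined; // spawns each test case
    try case_pool.init(.{ .allocator = gpa });
    defer case_pool.deinit();

    var stage2_pool: std.Thread.Pool = undefined; // passed to stage2
    try stage2_pool.init(.{ .allocator = gpa });
    defer stage2_pool.deinit();

    var wg: std.Thread.WaitGroup = .{};
    wg.start();
    try case_pool.spawn(runTestCase, .{ &stage2_pool, &wg });
    wg.wait();
}

fn runTestCase(stage2_pool: *std.Thread.Pool, outer_wg: *std.Thread.WaitGroup) void {
    defer outer_wg.finish();
    // stage2 spawns its work on its own pool and waits on it; since that
    // pool's workers are distinct from case_pool's, wait() cannot deadlock.
    var inner_wg: std.Thread.WaitGroup = .{};
    inner_wg.start();
    stage2_pool.spawn(stage2Task, .{&inner_wg}) catch {
        inner_wg.finish();
        return;
    };
    inner_wg.wait();
}

fn stage2Task(wg: *std.Thread.WaitGroup) void {
    defer wg.finish();
    // ... compilation work for the test case would happen here ...
}
```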
I spent most of the time working on this commit scratching my head trying
to figure out why I was getting ETXTBSY when spawning the test cases.
Turns out it's a fundamental Unix design flaw, and a known, unsolved
issue for the Go and Java maintainers:
https://github.com/golang/go/issues/22315
https://bugs.openjdk.org/browse/JDK-8068370
With this change, the following command, executed on my laptop, went from
6m24s to 1m44s:
```
stage1/bin/zig build test-cases -fqemu -fwasmtime -Denable-llvm
```
closes #11818
This comment is now deleted because the task is completed in this
commit:
```
// TODO: Update this to behave like `beginComptimePtrLoad` and properly check/use
// `container_ty` and `array_ty`, instead of trusting that the parent decl type
// matches the type used to derive the elem_ptr/field_ptr/etc.
//
// This is needed because the types will not match if the pointer we're mutating
// through is reinterpreting comptime memory.
```
The main strategy is to change the ComptimePtrMutationKit struct so that
instead of `val: *Value` it now holds a tagged union which can be one
of three possibilities:
* The pointer type matches the actual comptime Value so a direct
modification is possible. Before this commit, the implementation
incorrectly assumed this was always the case.
* In the case of needing to write through a reinterpreted pointer, a
mutable base Value pointer is provided along with a byte offset
pointing to the element value in virtual memory.
* Otherwise, it means a compile error must be emitted because one or
both of the types (the owner of the value, or the pointer type being
used to write through) do not have a well-defined memory layout.
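In schematic form (simplified, with made-up variant names; the real kit carries more context):
```zig
const Value = opaque {}; // stand-in for the compiler's Value type

// Sketch of the three-way result described above.
const ComptimePtrMutation = union(enum) {
    /// Pointer type matches the comptime Value: mutate it directly.
    direct: *Value,
    /// Writing through a reinterpreted pointer: mutate the base value
    /// at this byte offset within its virtual memory representation.
    reinterpret: struct {
        val_ptr: *Value,
        byte_offset: usize,
    },
    /// The owner of the value has no well-defined memory layout.
    bad_decl_ty,
    /// The pointer type being written through has no well-defined layout.
    bad_ptr_ty,
};
```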
After calling beginComptimePtrMutation, the one callsite now switches on
this tagged union and does the appropriate thing. The main new logic is
for the second case, which involves pointer reinterpretation, which now
takes this strategy:
1. write the base value to a memory buffer.
2. perform the pointer store at the proper byte offset, thereby
modifying a subset of the buffer.
3. read the base value from the memory buffer, overwriting the old base
value.
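The same idea on ordinary runtime memory, as an analogy only (Sema performs these steps on its virtual comptime memory, not with std.mem):
```zig
const std = @import("std");

test "mutate a base value through a byte offset" {
    var base: u32 = 0x11223344;
    var buf = std.mem.toBytes(base); // 1. write the base value to a buffer
    buf[1] = 0xaa; // 2. store at the proper byte offset
    base = std.mem.bytesToValue(u32, &buf); // 3. read it back, overwriting the old value
    // which byte changed depends on host endianness
    try std.testing.expect(base == 0x1122aa44 or base == 0x11aa3344);
}
```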
This commit does not change any behavior, but changes the type of
the runtime_index field from u32 to a non-exhaustive enum. This allows
us to put `std.math.maxInt(u32)` only in the enum type definition and
give it an official meaning.
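For illustration, with made-up member names (the real enum lives alongside the Value code):
```zig
const std = @import("std");

// Non-exhaustive enum: the former maxInt(u32) sentinel gets a named meaning
// inside the type, while `_` keeps every other u32 value representable.
const RuntimeIndex = enum(u32) {
    comptime_field_ptr = std.math.maxInt(u32),
    _,

    pub fn increment(self: *RuntimeIndex) void {
        self.* = @enumFromInt(@intFromEnum(self.*) + 1);
    }
};

test "ordinary indexes still fit" {
    var i: RuntimeIndex = @enumFromInt(0);
    i.increment();
    try std.testing.expect(i != .comptime_field_ptr);
}
```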
Rather than storing all the shifts in temporaries, we perform the correct
shifting without temporaries. This makes the runtime code more performant
and also simplifies the backend code, since we have a single abstraction.
However, this does not support floats with bit sizes other than
32 or 64; f16, f80, and f128 will require
compiler-rt support and are out of scope for this commit.
Signed integers are currently not supported either.
llvm: dump failed module when -femit-llvm-ir set
print_air:
* print fully qualified name
* use Type.fmt and Value.fmtValue, fmtDebug is useless
TypedValue:
* handle anon structs and tuples
* fix bugs
This comptime gate is needed to make sure that purely single-threaded
programs don't generate calls to the std.Thread API.
WASI targets successfully build again with this change.
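The gate is roughly this pattern (a sketch, not the code from the changed file):
```zig
const std = @import("std");
const builtin = @import("builtin");

fn processJob(job: u32) void {
    _ = job; // placeholder work
}

// Because `builtin.single_threaded` is comptime-known, only the taken branch
// is analyzed: single-threaded builds (e.g. WASI without threads) never see
// the std.Thread call at all.
pub fn runJob(job: u32) !void {
    if (builtin.single_threaded) {
        processJob(job);
    } else {
        const thread = try std.Thread.spawn(.{}, processJob, .{job});
        thread.join();
    }
}
```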
The function `writeDbgInfoNopsBuffered` was based on the function
`pwriteDbgInfoNops`, originally written by me, and then modified to
write to a memory buffer instead of an open file. When writing to a
file, any extra bytes beyond the end of the file extend the size of
the file, and the function body of `pwriteDbgInfoNops` takes advantage
of this when `next_padding_bytes` causes the write to go beyond the
end of the file. However, when writing to a memory buffer, the
underlying array list must be explicitly expanded if the write would
go past the end of the buffer.
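A sketch of the key difference (hypothetical helper, not the actual linker code): the buffered variant has to grow the array list before a write that would land past its current end.
```zig
const std = @import("std");

// Unlike a pwrite() past the end of a file, which implicitly extends the
// file, writing past the end of an ArrayList-backed buffer requires growing
// the list first (the sketch just zero-fills any gap).
fn writeAt(buffer: *std.ArrayList(u8), offset: usize, bytes: []const u8) !void {
    const end = offset + bytes.len;
    if (end > buffer.items.len) {
        try buffer.appendNTimes(0, end - buffer.items.len);
    }
    @memcpy(buffer.items[offset..end], bytes);
}

test "writing past the end grows the buffer" {
    var buffer = std.ArrayList(u8).init(std.testing.allocator);
    defer buffer.deinit();
    try writeAt(&buffer, 4, "nops");
    try std.testing.expectEqualSlices(u8, "\x00\x00\x00\x00nops", buffer.items);
}
```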