In stage1, this behavior was allowed (by accident?) and also
accidentally exercised by the behavior test changed in this commit. In
discussion on Discord, Andrew decided this should not be allowed in
stage2 since there is currently no real-world reason to allow this
strange edge case.
I've added a compiler test to solidify that this behavior should NOT
occur and updated the behavior test to match the new semantics.
* use the real start code for the LLVM backend with x86_64-linux
- there is still a check for zig_backend after initializing the TLS
area to skip some stuff.
* introduce new AIR instructions and implement them for the LLVM
backend. They are the same as `call` except with a modifier (see the
sketches after this list).
- call_always_tail
- call_never_tail
- call_never_inline
* LLVM backend calls hasRuntimeBitsIgnoringComptime in more places to
avoid unnecessarily depending on comptimeOnly being resolved for some
types.
* LLVM backend: remove duplicate code for setting linkage and value
name. The canonical place for this is in `updateDeclExports`.
* LLVM backend: do some assembly template massaging so that `%%` is
rendered as `%` (also sketched below). More hacks will be needed for
inline assembly to catch up with stage1.
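As context for the new call instructions: at the source level they
correspond to `@call` with an explicit modifier. A minimal sketch (the
function and test names are illustrative; this uses the enum-literal
form of `@call`, while compilers from this era took a `CallOptions`
struct instead):

```zig
const std = @import("std");

fn add(a: u32, b: u32) u32 {
    return a + b;
}

test "@call modifiers map to the new AIR variants" {
    // .never_inline should lower to call_never_inline; .always_tail and
    // .never_tail correspond to the other two new instructions.
    const sum = @call(.never_inline, add, .{ 1, 2 });
    try std.testing.expectEqual(@as(u32, 3), sum);
}
```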
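And a sketch of the `%%` escaping that the template massaging deals
with (x86_64-only; the function name is illustrative): in the
GCC-style template syntax, `%%rsp` denotes the literal register and
`%[ret]` an operand, so the backend has to render `%%` as a single `%`
before handing the string to LLVM:

```zig
const std = @import("std");

// Illustrative only: read the stack pointer on x86_64.
fn readStackPointer() usize {
    return asm volatile ("mov %%rsp, %[ret]"
        : [ret] "=r" (-> usize)
    );
}

test "inline asm register escape" {
    try std.testing.expect(readStackPointer() != 0);
}
```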
This implements type equality for error sets through element-wise
comparison of their members. Inferred error sets are always distinct
types, and all other error sets are kept sorted. See #11022.
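A minimal sketch of the resulting semantics (illustrative names, not a
test from this commit):

```zig
const std = @import("std");

const A = error{ Foo, Bar };
const B = error{ Bar, Foo };

test "explicit error sets compare element-wise" {
    // Same members in a different declaration order: the sorted,
    // element-wise comparison makes A and B the same type. An inferred
    // error set (e.g. from `fn f() !void`) would remain distinct.
    comptime std.debug.assert(A == B);
    try std.testing.expect(A == B);
}
```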
For some errors, if the found token is not on the same line as the
previous token, point to the end of the previous token instead.
This usually results in more helpful error messages.
This updates the stage2 test runner to report the passed, skipped, and
failed tests on stdout, similar to the LLVM backend.
The start function has also changed and is now more in line with stage1's.
The stage2 test infrastructure for wasm/wasi has been updated to reflect this as well.
* link: add a virtual function `lowerUnnamedConsts`, similar to
`updateFunc` or `updateDecl`, which needs to be implemented by each
linker backend in order to be used by the `CodeGen` code
* elf: implement the `lowerUnnamedConsts` specialization, lowering
unnamed constants to the `.rodata` section. We keep track of the
atoms encompassing the lowered unnamed consts in a global table
indexed by parent `Decl` (see the sketch after this list). When the
`Decl` is updated or destroyed, we clear the unnamed consts
referenced within it.
* macho: implement the `lowerUnnamedConsts` specialization, lowering
unnamed constants to the `__TEXT,__const` section. We keep track of
the atoms encompassing the lowered unnamed consts in a global table
indexed by parent `Decl`. When the `Decl` is updated or destroyed,
we clear the unnamed consts referenced within it.
* x64: change `MCValue.linker_sym_index` into two `MCValue`s: `.got_load` and
`.direct_load`. The former signifies to the emitter that it should
emit a GOT load relocation, while the latter that it should emit
a direct load (`SIGNED`) relocation.
* x64: lower `struct` instantiations
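A rough sketch of the bookkeeping described above, with hypothetical
names (`UnnamedConstTable`, `DeclIndex`, `AtomIndex`) and current std
APIs; the real tables live in the ELF and MachO linker backends and
differ in detail:

```zig
const std = @import("std");

const DeclIndex = u32; // stand-in for the compiler's Decl handle
const AtomIndex = u32; // stand-in for a linker atom handle

const UnnamedConstTable = struct {
    /// Atoms of lowered unnamed constants, grouped by parent Decl.
    map: std.AutoHashMap(DeclIndex, std.ArrayListUnmanaged(AtomIndex)),

    fn init(gpa: std.mem.Allocator) UnnamedConstTable {
        return .{ .map = std.AutoHashMap(DeclIndex, std.ArrayListUnmanaged(AtomIndex)).init(gpa) };
    }

    /// Record an atom produced while lowering an unnamed const for `parent`.
    fn track(self: *UnnamedConstTable, gpa: std.mem.Allocator, parent: DeclIndex, atom: AtomIndex) !void {
        const gop = try self.map.getOrPut(parent);
        if (!gop.found_existing) gop.value_ptr.* = .{};
        try gop.value_ptr.append(gpa, atom);
    }

    /// Called when the parent Decl is updated or destroyed: the unnamed
    /// consts lowered for it are dropped and re-lowered as needed.
    fn clearDecl(self: *UnnamedConstTable, parent: DeclIndex) void {
        if (self.map.getPtr(parent)) |list| list.clearRetainingCapacity();
    }
};
```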
AstGen:
* rename the known_has_bits flag to known_non_opv so that it better
reflects what it actually means.
* add a known_comptime_only flag.
* make the flags take advantage of primitive identifiers and the fact
that Zig has no shadowing.
* correct the known_non_opv flag for function bodies.
Sema:
* Rename `hasCodeGenBits` to `hasRuntimeBits` to better reflect what it
does.
- This function got a bit more complicated in this commit because of
the duality of function bodies: on one hand they have runtime bits,
but on the other hand they are required to be comptime-known.
* WipAnonDecl now takes a LazySrcLoc parameter and performs the type
resolutions that it needs during finish().
* Implement comptime `@ptrToInt` (sketched below).
Codegen:
* Improved handling of lowering decl_ref; make it work for
comptime-known ptr-to-int values.
- This same change had to be made many different times; perhaps we
should look into merging the implementations of `genTypedValue`
across x86, arm, aarch64, and riscv.
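A minimal sketch of the comptime `@ptrToInt` case (builtin names as
used at the time; newer compilers spell them `@intFromPtr` and
`@ptrFromInt`; the address is made up):

```zig
const std = @import("std");

test "comptime ptrToInt of a comptime-known pointer" {
    const addr: usize = 0x1000_0000; // illustrative, suitably aligned
    const ptr = @intToPtr(*volatile u32, addr);
    // Both conversions happen entirely at comptime.
    comptime std.debug.assert(@ptrToInt(ptr) == addr);
}
```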
* stage2: put decls in different MachO sections
Use `getDeclVAddrWithReloc` when targeting the MachO backend rather
than `getDeclVAddr` - this fn returns a zero vaddr and instead creates
a relocation on the linker side which is automatically updated
whenever the target decl is moved in memory. This fn also records
a rebase of the target pointer so that its value is correctly slid
in the presence of ASLR.
This commit enables `zig test` on x86_64-macos.
* stage2: fix output section selection for type,val pairs
* load the address (pointer) of a stack variable into a register via
the `lea` instruction
* store a value to the stack through a pointer held in a register via
a `mov [reg], imm` instruction
* these lowerings are handled automatically by the Mir -> Isel layer
* add initial (without safety) implementation of `.optional_payload`
* add matching stage2 test cases
* fix handling of the `ah`, `bh`, `ch`, and `dh` registers (which are
actually used as aliases of the `dil`, etc. registers). Currently, we
treat them as aliases only, meaning that when we encounter `ah` we
make sure to set REX.W to promote the instruction to 64 bits and use
the `dil` register instead - otherwise we might end up with a mismatch
between registers used in different parts of the codegen. In the
future, we can and should use `ah`, etc. as the upper 8-bit halves of
the 16-bit registers `ax`, etc.
* fix a bug in `airCmp` where the `.cmp` MIR instruction would force
the type to `Bool` instead of letting the original operand type
propagate downwards - we need this to make an informed choice of the
target register size and hence choose the right encoding down the line.
* implement lowering of 1-byte and 2-byte values to the stack and add
matching stage2 tests for x86_64 codegen (see the sketch below)
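A hedged sketch of the kind of behavior these lowerings target (not
the actual stage2 test cases):

```zig
const std = @import("std");

test "stack addresses, small stores, optional payload" {
    var a: u8 = 0; // 1-byte stack slot
    var b: u16 = 0; // 2-byte stack slot
    const pa = &a; // address of a stack variable lands in a register (`lea`)
    pa.* = 0xab; // store through the register-held pointer
    b = 0xbee;
    const opt: ?u8 = pa.*;
    // unwrapping exercises the new (no-safety) .optional_payload lowering
    try std.testing.expect(opt.? == 0xab);
    try std.testing.expect(b == 0xbee);
}
```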