Achieve this by reducing the amount of special casing to handle EOF so
that the already correct logic for normal comments does not need to be
duplicated.
After #35 is implemented,
we should be able to recover from this *at any indentation level*,
reporting a parse error and yet also parsing all the decls even
inside structs. Until then, I don't want to add any hacks to make
this work.
I don't understand the idea here of this kind of recovery. If we
want to resurrect this test case we need some comments on it to explain
the purpose, example use cases, expected behavior, etc.
* Added support for passing write file args as build options
* Fix missing fmtEscapes and unused format
* Actually fixed now, must be formatted
* remove addPathBuildOption
The z/Z format specifiers were merged last October (4 months ago). They were then deprecated in January (just over a month ago). This PR removes them altogether.
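For reference, a hedged sketch of the replacement style, assuming `std.zig.fmtId` and `std.zig.fmtEscapes` (the helper mentioned above) are the intended substitutes:

    const std = @import("std");

    test "escaping without the removed {z}/{Z} specifiers" {
        var buf: [128]u8 = undefined;
        // fmtId escapes the identifier if needed; fmtEscapes escapes
        // the bytes for a double-quoted string literal context.
        const out = try std.fmt.bufPrint(&buf, "const {} = \"{}\";", .{
            std.zig.fmtId("hello world"),
            std.zig.fmtEscapes("a\nb"),
        });
        try std.testing.expect(out.len > 0);
    }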
All stage2 tests are passing again in this branch.
Remaining checklist for this branch:
* get the rest of the zig fmt test cases passing
  - re-enable the translate-c test case that is blocking on this
* implement the 2 `@panic(TODO)`'s in parse.zig
* use fn_proto not fn_decl for extern function declarations
Allowing same-line doc comments causes some ambiguity as to how
generated docs should represent the case in which both same-line
and preceding-line doc comments are present:

    /// preceding line
    const foobar = 42; /// same line

Furthermore, disallowing these makes things simpler, as there is now
only one way to add a doc comment to a decl or struct field.
Using this in its current state would be a bug as it could cause line
comments to be deleted or a `// zig fmt: (on|off)` directive to be
missed.
Removing it doesn't currently cause any test failures. If a reason for
its continued existence is discovered in the future, another solution
will have to be found.
This type is not widely applicable enough to be part of the public
interface of the std.
The current implementation is only fully utilized by the zig fmt
implementation, which could benefit from even tighter integration, as
will be demonstrated in the next commit. Therefore, move the current
io.AutoIndentingStream to lib/std/zig/render.zig.
The C backend of the self-hosted compiler also uses this type currently,
but it does not require anywhere near its full complexity. Therefore,
implement a greatly simplified version of this interface in
src/codegen/c.zig.
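To illustrate the idea, here is a minimal sketch of such a simplified
indenting writer (names here are hypothetical, not the actual
src/codegen/c.zig code):

    const std = @import("std");

    // Hypothetical minimal indenting writer: a depth counter plus a
    // line-oriented write that prefixes the current indentation.
    fn IndentWriter(comptime UnderlyingWriter: type) type {
        return struct {
            underlying: UnderlyingWriter,
            indent: usize = 0,

            const Self = @This();

            fn writeLine(self: *Self, line: []const u8) !void {
                try self.underlying.writeByteNTimes(' ', self.indent * 4);
                try self.underlying.writeAll(line);
                try self.underlying.writeByte('\n');
            }

            fn pushIndent(self: *Self) void {
                self.indent += 1;
            }

            fn popIndent(self: *Self) void {
                self.indent -= 1;
            }
        };
    }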
I noticed that the write function does not properly use non-blocking
I/O. This file needs to be reworked for evented I/O to properly take
advantage of non-blocking writes to network sockets.
break, continue, blocks, bit_not, negation, identifiers, string
literals, integer literals, inline assembly
Also gave multiline string literals a different node tag from regular
string literals, for code clarity and to avoid an unnecessary load from
the token_tags array.
Conflicts:
* lib/std/zig/ast.zig
* lib/std/zig/parse.zig
* lib/std/zig/parser_test.zig
* lib/std/zig/render.zig
* src/Module.zig
* src/zir.zig
I resolved some of the conflicts by reverting a small portion of
@tadeokondrak's stage2 logic here regarding `callconv(.Inline)`.
It will need to get reworked as part of this branch.
This commit does not reach any particular milestone; it is
work-in-progress towards getting things to build.
There's a `@panic("TODO")` in translate-c that should be removed when
working on translate-c stuff.
I've added more of the ".def" files from mingw. The list is based on all the libraries referenced by the win32metadata project. (see https://github.com/marlersoft/zigwin32).
Blocks may end in a semicolon, but this semicolon is not counted by
recursive lastToken() evaluation on the sub-expression, which currently
causes off-by-one errors for lastToken() on blocks.
To fix this, introduce BlockSemicolon and BlockTwoSemicolon following
the pattern used for trailing commas in e.g. builtin function arguments.
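For example (a sketch of the problematic shape):

    fn foo() void {}
    fn bar() void {}

    test "block whose last statement has a trailing semicolon" {
        {
            foo();
            bar();
            // ^ The trailing `;` belongs to the block, but recursive
            // lastToken() on the `bar()` sub-expression does not count
            // it, leaving the block's lastToken() one token short.
        }
    }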
rename PtrType => PtrTypeBitRange, SliceType => PtrType
This rename was done because the current SliceType is used for
non-bitrange pointers as well as slices, and because
PtrTypeSentinel/PtrTypeAligned are also used for slices. Using the same
Ptr prefix for all these pointer/slice nodes is therefore an
improvement.
* thread/condition: fix PthreadCondition compilation
* thread/condition: add wait, signal and broadcast
This is like std.Thread.Mutex, which forwards calls to `impl`; it
avoids having to write `cond.impl` at every call site (see the sketch
after this list).
* thread/condition: initialize the implementation
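A minimal sketch of that forwarding pattern (illustrative; the real
field and method names may differ):

    const std = @import("std");
    const Mutex = std.Thread.Mutex;

    // Stand-in for the platform implementation selected at comptime
    // (e.g. PthreadCondition); hypothetical here.
    const Impl = struct {
        fn wait(impl: *Impl, mutex: *Mutex) void {
            _ = impl;
            _ = mutex;
        }
        fn signal(impl: *Impl) void {
            _ = impl;
        }
        fn broadcast(impl: *Impl) void {
            _ = impl;
        }
    };

    pub const Condition = struct {
        impl: Impl = .{},

        // Forwarding wrappers, so callers write `cond.wait(&m)`
        // instead of `cond.impl.wait(&m)`.
        pub fn wait(cond: *Condition, mutex: *Mutex) void {
            cond.impl.wait(mutex);
        }
        pub fn signal(cond: *Condition) void {
            cond.impl.signal();
        }
        pub fn broadcast(cond: *Condition) void {
            cond.impl.broadcast();
        }
    };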
After a right shift, top limbs may be all zero. However, without
normalization, the number of limbs is not going to change.
In order to check if a big number is zero, we used to assume that the
number of limbs is 1, which may not be the case after right shifts,
even if the actual value is zero.
- Normalize after a right shift
- Add a test for that issue
- Check all the limbs in `eqlZero()` (see the sketch below). This may
  not be necessary if callers always remember to normalize before
  calling the function, but checking all the limbs is very cheap and
  makes the function less bug-prone.
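A sketch of that defensive `eqlZero()` (assuming limbs are stored in a
`limbs` slice, least significant first):

    // Returns true even for non-normalized values, such as the result
    // of a right shift whose top limbs are all zero.
    fn eqlZero(limbs: []const usize) bool {
        for (limbs) |limb| {
            if (limb != 0) return false;
        }
        return true;
    }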
This is a proof-of-concept of switching to a new memory layout for
tokens and AST nodes. The goal is threefold:
* smaller memory footprint
* faster performance for tokenization and parsing
* most importantly, a proof-of-concept that can also be applied to ZIR
and TZIR to improve the entire compiler pipeline in this way.
I had a few key insights here:
* Underlying premise: using less memory will make things faster, because
of fewer allocations and better cache utilization. Also using less
memory is valuable in and of itself.
* Using a Struct-Of-Arrays for tokens and AST nodes saves the bytes of
padding between the enum tag (which kind of token is it; which kind
of AST node is it) and the next fields in the struct. It also improves
cache coherence, since one can peek ahead in the tokens array without
having to load the source locations of tokens.
* Token memory can be conserved by only having the tag (1 byte) and byte
offset (4 bytes) for a total of 5 bytes per token. It is not necessary
to store the token's ending byte offset because one can always
re-tokenize later. Moreover, for most tokens the length can be trivially
determined from the tag alone, and for the ones where it cannot (string
literals, for example), one must parse the string literal again later in
astgen anyway, making it free to re-tokenize. (See the token sketch
after this list.)
* AST nodes do not actually need to store more than 1 token index because
one can poke left and right in the tokens array very cheaply.
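A sketch of the per-token data implied by the list above (field and tag
names are illustrative):

    // Stored in struct-of-arrays form, so the 1-byte tags and the
    // 4-byte offsets each live in their own densely packed array.
    const Token = struct {
        tag: Tag, // 1 byte: which kind of token this is
        start: u32, // byte offset into the source; no end offset stored

        const Tag = enum(u8) { identifier, string_literal, l_paren, r_paren };
    };
    // 5 bytes of payload per token; re-tokenize from `start` whenever
    // the length cannot be derived from the tag alone.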
So far we are left with one big problem though: how can we put AST nodes
into an array, since different AST nodes are different sizes?
This is where my key observation comes in: one can have a hash table for
the extra data for the less common AST nodes! But it gets even better than
that:
I defined this data that is always present for every AST Node:
* tag (1 byte)
  - which AST node is it
* main_token (4 bytes, index into tokens array)
  - the tag determines which token this points to
* struct{lhs: u32, rhs: u32}
  - enough to store 2 indexes to other AST nodes; the tag determines
    how to interpret this data
You can see how a binary operation, such as `a * b`, would fit into this
structure perfectly. A unary operation, such as `*a`, would also fit,
and leave `rhs` unused. So this is a total of 13 bytes per AST node.
And again, we don't have to pay for the padding to round up to 16 because
we store in struct-of-arrays format.
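In code form, the always-present node data sketched above looks roughly
like this (illustrative, not necessarily the final ast.zig definitions):

    const Node = struct {
        tag: Tag, // 1 byte: which AST node this is
        main_token: u32, // index into the tokens array
        data: Data, // interpretation of lhs/rhs depends on `tag`

        const Data = struct {
            lhs: u32,
            rhs: u32,
        };
        const Tag = enum(u8) { mul, negation }; // hypothetical tags
    };
    // For `a * b`: tag = .mul, main_token points at `*`, and lhs/rhs
    // hold the node indexes of `a` and `b`. For a unary operation,
    // lhs holds the operand and rhs goes unused.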
I made a further observation: the only kind of data AST nodes need to
store other than the main_token is indexes to sub-expressions. That's it.
The only purpose of an AST is to bring a tree structure to a list of tokens.
This observation means all the data that nodes store is just sets of u32
indexes to other nodes. The other tokens can be found later by the
compiler by poking around in the tokens array, which again is super fast
because it is struct-of-arrays: you often only need to look at the token
tags array, which is an array of bytes and therefore very cache
friendly.
So for nearly every kind of AST node, you can store it in 13 bytes. For the
rarer AST nodes that have 3 or more indexes to other nodes to store, either
the lhs or the rhs will be repurposed to be an index into an extra_data array
which contains the extra AST node indexes. In other words, no hash table
is needed; it's just 1 big ArrayList with the extra data for AST Nodes.
Final observation: there is no need to have a canonical tag for a given
AST. For example:
The expression `foo(bar)` is a function call. Function calls can have any
number of parameters. However in this example, we can encode the function
call into the AST with a tag called `FunctionCallOnlyOneParam`, and use lhs
for the function expr and rhs for the only parameter expr. Meanwhile if the
code was `foo(bar, baz)` then the AST node would have to be `FunctionCall`
with lhs still being the function expr, but rhs being the index into
`extra_data`. Then because the tag is `FunctionCall` it means
`extra_data[rhs]` is the "start" and `extra_data[rhs+1]` is the "end".
Now the range `extra_data[start..end]` describes the list of parameters
to the function.
Point being, you only have to pay for the extra bytes if the AST actually
requires it. There's no limit to the number of different AST tag encodings.
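A hedged sketch of decoding that `FunctionCall` encoding (array and
field names are illustrative):

    const Data = struct { lhs: u32, rhs: u32 };

    // `datas` is the per-node struct{lhs, rhs} column; `extra_data` is
    // the one big array of u32 described above.
    fn callParams(datas: []const Data, extra_data: []const u32, call_node: u32) []const u32 {
        const d = datas[call_node];
        // For tag == FunctionCall: lhs is the function expression and
        // rhs points into extra_data at a {start, end} pair.
        const start = extra_data[d.rhs];
        const end = extra_data[d.rhs + 1];
        return extra_data[start..end]; // node indexes of the parameters
    }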
Preliminary results:
* 15% improvement on cache-misses
* 28% improvement on total instructions executed
* 26% improvement on total CPU cycles
* 22% improvement on wall clock time
This is 1 of 4 items on the checklist before this can actually be merged:
* [x] parser
* [ ] render (zig fmt)
* [ ] astgen
* [ ] translate-c
It now uses the log scope "gpa" instead of "std".
Additionally, there is a new config option `verbose_log`, which enables
info log messages for every allocation; this can be useful when
debugging. This option is off by default.
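Enabling the option looks like this:

    const std = @import("std");

    // verbose_log is comptime configuration, so the per-allocation
    // info messages can be compiled out entirely when off (the default).
    var gpa = std.heap.GeneralPurposeAllocator(.{ .verbose_log = true }){};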
Also known as "Struct-Of-Arrays" or "SOA". The purpose of this data
structure is to provide a similar API to ArrayList but instead of
the element type being a struct, the fields of the struct are in N
different arrays, all with the same length and capacity.
Having this abstraction means we can put all the arrays in the same
allocation, avoiding overhead with the allocator. It also saves a tiny
bit of overhead from redundant capacity and length fields, since all
the field arrays share the same length and capacity.
This is an alternate implementation to #7854.
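A small usage sketch of the API described above (assuming it mirrors
ArrayList's allocator-passing style; exact method names may differ):

    const std = @import("std");

    const Token = struct {
        tag: u8,
        start: u32,
    };

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit();
        const allocator = gpa.allocator();

        // One allocation backs both field arrays.
        var tokens = std.MultiArrayList(Token){};
        defer tokens.deinit(allocator);

        try tokens.append(allocator, .{ .tag = 1, .start = 0 });
        try tokens.append(allocator, .{ .tag = 2, .start = 5 });

        // Scan one field's array without touching the other field.
        for (tokens.items(.tag)) |tag| {
            std.debug.print("tag={d}\n", .{tag});
        }
    }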