The GeneralPurposeAllocator now uses the log scope "gpa" instead of "std".
Additionally, there is a new config option `verbose_log` which enables
info log messages for every allocation. This can be useful when
debugging. The option is off by default.
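A minimal sketch of enabling it, using the `std.GeneralPurposeAllocator`
usage described later in these notes:

```zig
const std = @import("std");

// verbose_log is off by default; enabling it logs an info message for
// every allocation, under the "gpa" log scope.
var gpa = std.GeneralPurposeAllocator(.{ .verbose_log = true }){};
```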
* move concurrency primitives that always operate on kernel threads to
the std.Thread namespace
* remove std.SpinLock. Nobody should use this in a non-freestanding
environment; the other primitives are always preferable. In
freestanding, custom spin logic will be necessary anyway, so there is
no use case for a std lib version.
* move some std lib files to the top level fields convention
* add std.Thread.spinLoopHint (see the sketch below)
* add std.Thread.Condition
* add std.Thread.Semaphore
* new implementation of std.Thread.Mutex for Windows and non-pthreads Linux
* add std.Thread.RwLock
Implementations provided by @kprotty
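A minimal sketch of std.Thread.spinLoopHint in use:

```zig
const std = @import("std");

// Spin on a flag set by another thread, emitting a CPU-friendly hint
// (e.g. the pause instruction on x86) on each iteration of the loop.
fn spinUntilSet(flag: *const bool) void {
    while (!@atomicLoad(bool, flag, .Acquire)) {
        std.Thread.spinLoopHint();
    }
}
```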
The backing allocator may return a block that is actually bigger than
the one requested by the user; use the correct quantity when keeping
track of the allocation ceiling.
Closes #6049
This is a temporary debugging trick you can use to turn segfaults into more helpful
logged error messages with stack trace details. The downside is that every allocation
will be leaked!
`std.builtin.StackTrace` gains a `format` function.
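For instance, a captured trace can now be printed by any std.fmt-based
function (a minimal sketch, assuming a trace is already in hand):

```zig
const std = @import("std");

// The new format method lets `{}` print a StackTrace directly.
fn reportTrace(trace: std.builtin.StackTrace) void {
    std.log.err("allocated here:\n{}", .{trace});
}
```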
GeneralPurposeAllocator uses `std.log.err` instead of directly printing
to stderr. Some errors are recoverable.
The test runner is modified to fail the test run if any log messages of
"err" or worse severity are encountered.
The self-hosted compiler is modified to always print log messages of
"err" severity or worse even if they have not been explicitly enabled.
This makes GeneralPurposeAllocator available on the freestanding target.
We don't pass no-omit-frame-pointer in ReleaseSafe by default, so it
also makes sense not to collect stack trace frames by default in
ReleaseSafe mode.
This makes `@returnAddress()` return 0 for WebAssembly (when not using
the Emscripten OS) and avoids trying to capture stack traces for the
general purpose allocator on that target.
The high level Allocator interface API functions will now do a
`@returnAddress()` so that stack traces captured by allocator
implementations do not include the Allocator overhead functions. This
makes `4` a more reasonable default for how many stack frames to
capture.
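For example, a caller who wants deeper traces can raise the frame count
through the config struct (a sketch; the field name `stack_trace_frames`
is an assumption here):

```zig
const std = @import("std");

// stack_trace_frames (assumed field name) controls how many frames the
// allocator captures per allocation; the default is now 4.
var gpa = std.GeneralPurposeAllocator(.{ .stack_trace_frames = 8 }){};
```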
* std.Mutex API is improved to not have init()/deinit(). This API is
designed to support static initialization and does not require any
resource cleanup (see the sketch after this list). This also happens
to work around some kind of stage1 behavior that was preventing the
new allocator mutex code from compiling.
* the general purpose allocator now returns a bool from deinit()
which reports whether there were any leaks. The test runner uses this
value to fail the tests if there are any.
* self-hosted compiler is updated to use the general purpose allocator
when not linking against libc.
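A sketch of the init()-free std.Mutex API from the first item above,
assuming the acquire()/Held pattern of this std version:

```zig
const std = @import("std");

// Static initialization: no init() or deinit() call is required.
var global_mutex = std.Mutex{};

fn criticalSection() void {
    const held = global_mutex.acquire();
    defer held.release();
    // ... code protected by the mutex goes here ...
}
```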
`std.GeneralPurposeAllocator` is now available. It is a function that
takes a configuration struct (with default field values) and returns an
allocator type. There is a detailed description of this allocator in
the doc comments at the top of the new file.
The main feature of this allocator is that it is *safe*. It
prevents double-free, use-after-free, and detects leaks.
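A minimal usage sketch, with the default configuration:

```zig
const std = @import("std");

pub fn main() !void {
    // .{} is the configuration struct; all fields keep their defaults.
    var gpa = std.GeneralPurposeAllocator(.{}){};
    defer {
        // deinit() returns true if any allocations leaked.
        const leaked = gpa.deinit();
        if (leaked) std.log.err("memory leaked", .{});
    }

    const allocator = &gpa.allocator;
    const buf = try allocator.alloc(u8, 100);
    defer allocator.free(buf);
}
```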
Some deprecation compile errors are removed.
The Allocator interface gains `old_align` as a new parameter to
`resizeFn`. This allows allocator implementations to quickly look up
allocations.
`std.heap.page_allocator` is improved to use mmap address hints to avoid
obtaining the same virtual address pages when unmapping and mapping
pages. The new general purpose allocator uses the page allocator as its
backing allocator by default.
`std.testing.allocator` is replaced with usage of this new allocator,
which does leak checking, and so the LeakCheckAllocator is retired.
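A sketch of what this means for tests: forgetting the `free` below now
fails the test instead of silently leaking.

```zig
const std = @import("std");

test "testing.allocator checks for leaks" {
    const buf = try std.testing.allocator.alloc(u8, 16);
    // Removing this free would make the test runner report a failure.
    defer std.testing.allocator.free(buf);
}
```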
stage1 is improved so that the `@typeInfo` of a pointer has a lazy value
for the alignment of the child type, to avoid false dependency loops
when dealing with pointers to async function frames.
The `std.mem.Allocator` interface is refactored to be in its own file.
`std.Mutex` now exposes the dummy mutex with `std.Mutex.Dummy`.
This allocator is great for debug mode, however it needs some work to
have better performance in release modes. The next step will be setting
up a series of tests in ziglang/gotta-go-fast and then making
improvements to the implementation.
* introduce std.ArrayListUnmanaged for when you have the allocator
stored elsewhere (see the sketch below)
* move std.heap.ArenaAllocator implementation to its own file. Extract
the main state into std.heap.ArenaAllocator.State, which can be
stored as an alternative to storing the entire ArenaAllocator, saving
24 bytes per ArenaAllocator on 64 bit targets.
* std.LinkedList.Node pointer field now defaults to null.
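A sketch of the std.ArrayListUnmanaged usage referenced in the first
item, assuming the allocator-per-call API and default-initialized
fields:

```zig
const std = @import("std");

test "unmanaged list takes the allocator per call" {
    // No allocator is stored in the list; every allocating method
    // receives one explicitly.
    var list: std.ArrayListUnmanaged(u32) = .{};
    defer list.deinit(std.testing.allocator);

    try list.append(std.testing.allocator, 42);
    std.debug.assert(list.items[0] == 42);
}
```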
* Rework self-hosted compiler Package API
* Delete almost all the bitrotted self-hosted compiler code. The only
bitrotted code left is in main.zig and compilation.zig.
* Add call instruction to ZIR
* self-hosted compiler ir API and link API are reworked to support
a long-running compiler that incrementally updates declarations
* Introduce the concept of scopes to ZIR semantic analysis
* ZIR text format supports referencing named decls that are declared
later in the file
* Figure out how memory management works for the long-running compiler
and incremental compilation. The main roots are top level
declarations. There is a table of decls. The key is a cryptographic
hash of the fully qualified decl name. Each decl has an arena
allocator where all of the memory related to that decl is stored.
Each code block has its own arena allocator for the lifetime of
the block. Values that want to survive when going out of scope in
a block must get copied into the outer block. Finally, values must
get copied into the Decl arena to be long-lived (see the sketch after
this list).
* Delete the unused MemoryCell struct. Instead, comptime pointers are
based on references to Decl structs.
* Figure out how caching works. Each Decl will store a set of other
Decls which must be recompiled when it changes.
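A minimal sketch of the decl table described above (all type and field
names here are hypothetical, and the concrete hash function is an
assumption):

```zig
const std = @import("std");

const Decl = struct {
    /// Fully qualified name, owned by the arena below.
    name: []const u8,
    /// All long-lived memory related to this decl is allocated here.
    arena: std.heap.ArenaAllocator.State,
    /// Decls that must be recompiled when this one changes.
    dependents: std.AutoHashMap(*Decl, void),
};

/// Key: cryptographic hash of the fully qualified decl name
/// (e.g. 128 truncated bits of some cryptographic hash).
const DeclTable = std.AutoHashMap(u128, *Decl);
```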
This branch is still work-in-progress; this commit breaks the build.