* Offset updates take place using TransactionWriter
* Refactor TransactionWriter in current state server
* Refactor TransactionWriter in federation sender
* Refactor TransactionWriter in key server
* Refactor TransactionWriter in media API
* Refactor TransactionWriter in server key API
* Refactor TransactionWriter in sync API
* Refactor TransactionWriter in user API
* Fix deadlocking Sync API tests
* Un-deadlock device database
* Fix appservice API
* Rename TransactionWriters to Writers
* Move writers up a layer in sync API
* Document sqlutil.Writer interface
* Add note to Writer documentation
* Updated TransactionWriters, moved locks in roomserver, various other tweaks
* Fix redaction deadlocks
* Fix lint issue
* Rename SQLiteTransactionWriter to ExclusiveTransactionWriter
* Fix us not sending transactions through in latest events updater
* Initial pass at refactoring config (not finished)
* Don't forget current state and EDU servers
* More shifting around
* Update server key API tests
* Fix roomserver test
* Fix more tests
* Further tweaks
* Fix current state server test (sort of)
* Maybe fix appservices
* Fix client API test
* Include database connection string in database options
* Fix sync API build
* Update config test
* Fix unit tests
* Fix federation sender build
* Fix gobind build
* Set Listen address for all services in HTTP monolith mode
* Validate config, reinstate appservice derived in directory, tweaks
* Tweak federation API test
* Set MaxOpenConnections/MaxIdleConnections to previous values
* Update generate-config
* Add support for logs in StreamingToken
Tokens now end up looking like `s11_22|dl-0-123|ab-0-12224`
where `dl` and `ab` are log names, `0` is the partition and
`123` and `12224` are the offsets.
* Also test reserialisation
* s/|/./g so tokens URL-escape nicely
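A hedged sketch of what parsing the per-log sections of such a token might look like (using the `.` separator from the change above); the `logOffset` type and `parseLogOffsets` function are invented for illustration and are not the real syncapi code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// logOffset is one "<name>-<partition>-<offset>" section of the token.
type logOffset struct {
	Name      string
	Partition int
	Offset    int64
}

func parseLogOffsets(tok string) ([]logOffset, error) {
	parts := strings.Split(tok, ".")
	offsets := make([]logOffset, 0, len(parts)-1)
	for _, p := range parts[1:] { // parts[0] is the "s11_22" stream section
		fields := strings.SplitN(p, "-", 3)
		if len(fields) != 3 {
			return nil, fmt.Errorf("malformed log offset %q", p)
		}
		partition, err := strconv.Atoi(fields[1])
		if err != nil {
			return nil, err
		}
		offset, err := strconv.ParseInt(fields[2], 10, 64)
		if err != nil {
			return nil, err
		}
		offsets = append(offsets, logOffset{fields[0], partition, offset})
	}
	return offsets, nil
}

func main() {
	offsets, _ := parseLogOffsets("s11_22.dl-0-123.ab-0-12224")
	fmt.Printf("%+v\n", offsets) // [{Name:dl Partition:0 Offset:123} {Name:ab Partition:0 Offset:12224}]
}
```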
* Use TransactionWriter on other component SQLites
* Fix sync API tests
* Fix panic in media API
* Fix a couple of transactions
* Fix wrong query, add some logging output
* Add debug logging into StoreEvent
* Adjust InsertRoomNID
* Update logging
* Deduplicate FS database, add some EDU persistence groundwork
* Extend TransactionWriter to use optional existing transaction, use that for FS SQLite database writes
* Fix build due to bad keyserver import
* Working EDU persistence
* gocyclo, unsurprisingly
* Remove unused
* Update copyright notices
* Add a bit more logging to the fedsender
* bugfix: continue sending PDUs if new ones are added whilst sending another PDU
Without this, the queue goes back to sleep on `<-oq.notifyPDUs`, which won't
fire because `pendingPDUs` is already > 0. This should fix a flaky sytest.
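A rough sketch of the shape of the problem and fix, with invented names (`outgoingQueue` here is only loosely modelled on the federation sender's queue, not the actual code): the worker checks the pending count before blocking on the notification channel, so PDUs queued mid-send still get drained.

```go
package fedsender

import (
	"encoding/json"
	"sync"
)

type outgoingQueue struct {
	mu          sync.Mutex
	pendingPDUs []json.RawMessage
	notifyPDUs  chan struct{} // buffered, capacity 1
}

func (oq *outgoingQueue) worker(send func(json.RawMessage)) {
	for {
		oq.mu.Lock()
		n := len(oq.pendingPDUs)
		oq.mu.Unlock()
		if n == 0 {
			// Only sleep when nothing is pending; if a PDU arrived while we
			// were busy sending, keep draining instead of blocking here.
			<-oq.notifyPDUs
			continue
		}
		oq.mu.Lock()
		pdu := oq.pendingPDUs[0]
		oq.pendingPDUs = oq.pendingPDUs[1:]
		oq.mu.Unlock()
		send(pdu)
	}
}
```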
* Break if no txn is sent
* WIP syncapi work
* More debugging
* Bump GMSL version to pull in working Event.Redact
* Remove logging
* Make redactions work on v3+
* Fix more tests
* Move filter table to syncapi where it is used
* Implement /sync `limited` and read timeline limit from stored filters
We now fully handle `room.timeline.limit` filters (inline + stored) and
return the right value for `limited` in sync responses.
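A minimal sketch of the idea, assuming a plain slice of event IDs and an invented `applyTimelineLimit` helper; the real logic lives in the syncapi storage layer.

```go
package main

import "fmt"

// defaultTimelineLimit is a placeholder value; the real default comes from
// the filter handling in the syncapi.
const defaultTimelineLimit = 20

// applyTimelineLimit returns at most `limit` of the newest events plus a
// `limited` flag indicating that older events were dropped.
func applyTimelineLimit(events []string, limit int) (trimmed []string, limited bool) {
	if limit <= 0 {
		limit = defaultTimelineLimit
	}
	if len(events) <= limit {
		return events, false
	}
	// Keep the newest events (the tail of the slice) and flag the response.
	return events[len(events)-limit:], true
}

func main() {
	evs := []string{"$e1", "$e2", "$e3", "$e4"}
	trimmed, limited := applyTimelineLimit(evs, 2)
	fmt.Println(trimmed, limited) // [$e3 $e4] true
}
```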
* Update whitelist
* Default to the default timeline limit if it's unset, also strip the extra event correctly
* Update whitelist
* Make userapi responsible for checking access tokens
There are still plenty of dependencies on account/device DBs, but this
is a start. This is a breaking change, as it adds a required config
value `listen.user_api`.
* Cleanup
* Review comments and test fix
* Groundwork for send-to-device messaging
* Update sample config
* Add unstable routing for now
* Send to device consumer in sync API
* Start the send-to-device consumer
* fix indentation in dendrite-config.yaml
* Create send-to-device database tables, other tweaks
* Add some logic for send-to-device messages, add them into sync stream
* Handle incoming send-to-device messages, count them with EDU stream pos
* Undo changes to test
* pq.Array
* Fix sync
* Logging
* Fix a couple of transaction things, fix client API
* Add send-to-device test, hopefully fix bugs
* Comments
* Refactor a bit
* Fix schema
* Fix queries
* Debug logging
* Fix storing and retrieving of send-to-device messages
* Try to avoid database locks
* Update sync position
* Use latest sync position
* Jiggle about sync a bit
* Fix tests
* Break out the retrieval from the update/delete behaviour
* Comments
* nolint on getResponseWithPDUsForCompleteSync
* Try to line up sync tokens again
* Implement wildcard
* Add all send-to-device tests to whitelist, what could possibly go wrong?
* Only care about wildcard when targeted locally
* Deduplicate transactions
* Handle tokens properly, return immediately if there are waiting send-to-device messages
* Fix sync
* Update sytest-whitelist
* Fix copyright notice (need to do more of this)
* Comments, copyrights
* Return errors from Do, fix dendritejs
* Review comments
* Comments
* Constructor for TransactionWriter
* defletions
* Update gomatrixserverlib, sytest-blacklist
* Per-user-per-device sync streams
* Tweaks
* Tweaks
* Pass full device into CompleteSync
* Set user IDs and device IDs properly in tests
* Add new test, fix TestNewEventAndWasPreviouslyJoinedToRoom
* nolint a function that is not used yet
* Add test for waking up single device
* Hopefully unstick test
* Try to ensure that TestCorrectStreamWakeup doesn't block forever
* Update tests
* sytest: Make 'Inbound federation can backfill events' pass
This breaks 'Outbound federation can backfill events' because now
we are returning the right number of events, which the previous
test was relying on.
Previously, /messages was backfilling the membership event, causing
the test to pass. Now we are no longer backfilling the membership
event due to the change in this commit, causing the test to fail.
The test should instead be returning the membership event locally
from the syncapi's database, but it doesn't do so fast enough, resulting
in a no-op /sync response with a next_batch=s0_0 which will never
pick up the local membership event when it rolls in. The test
does attempt to retry, but doesn't use the new next_batch=s1_0,
so the membership event is missing from the /messages response.
* Linting
* Refactor all postgres tables; start work on sqlite
* wip sqlite merges; database is locked errors to investigate and failing tests
* Revert "wip sqlite merges; database is locked errors to investigate and failing tests"
This reverts commit 26cbfc5b75ae2dc4fb31a838b917aa39d758f162.
* convert current room state table
* port over sqlite topology table
* remove a few functions
* remove more functions
* Share more code
* factor out completesync and a bit more
* Remove remaining code
* Initial syncapi storage refactor to share pq/sqlite code
This goes down a different route than https://github.com/matrix-org/dendrite/pull/985,
which tried to reduce even the boilerplate of `ExecContext` etc. The previous pattern
fails badly when there are subtle differences in parameters, and hence the shared
boilerplate to read from `QueryContext` breaks. Rather than attacking it at that level,
the main place where we want to reuse code is `syncserver.go` itself - the
database implementation, which has lots of complex logic. So instead, this commit:
- Makes `invites_table.go` an interface.
- Makes `SyncServerDatasource` use that interface
- This means some functions are now identical for pq/sqlite, so factor them out
to a temporary `shared.Database` struct which will grow until it replaces all of
`SyncServerDatasource`.
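A hedged sketch of that pattern with placeholder names: the engine-specific table sits behind a small interface, and the shared database struct only ever calls the interface.

```go
package shared

import (
	"context"
	"database/sql"
)

// invitesTable is implemented once per engine (postgres, sqlite).
type invitesTable interface {
	InsertInviteEvent(ctx context.Context, txn *sql.Tx, roomID, userID string, eventJSON []byte) (streamPos int64, err error)
	DeleteInviteEvent(ctx context.Context, eventID string) error
}

// Database holds engine-specific tables behind interfaces; logic that used to
// be duplicated in each SyncServerDatasource migrates in here over time.
type Database struct {
	DB      *sql.DB
	Invites invitesTable
}

// AddInvite shows how the shared logic only talks to the interface, never to
// engine-specific SQL.
func (d *Database) AddInvite(ctx context.Context, roomID, userID string, eventJSON []byte) (int64, error) {
	return d.Invites.InsertInviteEvent(ctx, nil, roomID, userID, eventJSON)
}
```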
* Missing files
* syncapi: Rename and split out tokens
Previously we used the badly named `PaginationToken` which was
used for both `/sync` and `/messages` requests. This quickly
became confusing because named fields like `PDUPosition` meant
different things depending on the token type. Instead, we now have
two token types: `TopologyToken` and `StreamingToken`, both of
which have fields which make more sense for their specific situations.
Updated the codebase to use one or the other. `PaginationToken` still
lives on as `syncToken`, an unexported type which both tokens rely on.
This allows us to guarantee that the specific mappings of positions
to a string remain solely under the control of the `types` package.
This enables us to move high-level conceptual things like
"decrement this topological token" to function calls, e.g.
`TopologyToken.Decrement()`.
Currently broken because `/messages` seemingly used both stream and
topological tokens, though I need to confirm this.
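A small, illustrative sketch of the split; the field names and the string format below are placeholders rather than the real `types` package.

```go
package types

import "fmt"

// syncToken is the unexported representation that both public token types
// embed, so the string encoding stays in one place.
type syncToken struct {
	Positions []int64
}

// StreamingToken is handed out by /sync.
type StreamingToken struct{ syncToken }

// TopologyToken is handed out by /messages and is ordered by depth.
type TopologyToken struct{ syncToken }

// Decrement steps a topology token backwards, clamping at zero.
func (t *TopologyToken) Decrement() {
	if len(t.Positions) > 0 && t.Positions[0] > 0 {
		t.Positions[0]--
	}
}

// String assumes two positions for the purposes of this sketch.
func (t TopologyToken) String() string {
	return fmt.Sprintf("t%d_%d", t.Positions[0], t.Positions[1])
}
```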
* final tweaks/hacks
* spurious logging
* Review comments and linting
* Fix ordering when backfilling
The problem was that we weren't sorting the returned events
by depth when sending them back to the caller; instead we
were sorting by prev_events, which is not the same thing.
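A minimal illustration of the fix, with a stand-in event type: sort backfilled events by their depth before returning them.

```go
package main

import (
	"fmt"
	"sort"
)

type event struct {
	EventID string
	Depth   int64
}

// sortByDepth orders events by depth, which is what callers of backfill expect.
func sortByDepth(events []event) {
	sort.SliceStable(events, func(i, j int) bool {
		return events[i].Depth < events[j].Depth
	})
}

func main() {
	evs := []event{{"$c", 12}, {"$a", 10}, {"$b", 11}}
	sortByDepth(evs)
	fmt.Println(evs) // [{$a 10} {$b 11} {$c 12}]
}
```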
* Fixup tests
* Limit database connections (#564)
- Add new options to the config file database:
max_open_conns: 100
max_idle_conns: 2
conn_max_lifetime: -1
- Implement connection parameter setup on the *DB (database/sql) in internal/sqlutil/trace.go:Open()
- Propagate the values in the form of a DbProperties interface via all the
Open() and NewDatabase() functions
Signed-off-by: Tomas Jirka <tomas.jirka@email.cz>
* Fix wasm builds
* Remove file accidentally added from working tree
Co-authored-by: Tomas Jirka <tomas.jirka@email.cz>
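For reference, the three config values map onto the standard `database/sql` knobs; this sketch shows that mapping, though the exact plumbing through the `DbProperties` interface in Dendrite differs.

```go
package sqlutil

import (
	"database/sql"
	"time"

	_ "github.com/lib/pq" // postgres driver
)

// openWithLimits is an illustrative helper, not Dendrite's real Open().
func openWithLimits(dsn string, maxOpen, maxIdle int, maxLifetime time.Duration) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(maxOpen)        // max_open_conns: 100
	db.SetMaxIdleConns(maxIdle)        // max_idle_conns: 2
	db.SetConnMaxLifetime(maxLifetime) // conn_max_lifetime: -1 (<= 0 means connections never expire by age)
	return db, nil
}
```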
* Correctly generate backpagination tokens for events which have the same depth
With tests. Unfortunately the code around here is hard to understand.
There will be a subsequent PR which fixes this up now that we have a test
harness in place.
* Add postgres impl
* More linting
* Fix psql statement so it actually works
* sql/backwards_extremities: Shift to table format and share code
This is an initial cut to reduce boilerplate at the storage layer.
It removes the need for two `_table.go` files, one for each DB engine,
replacing them with a single struct backed by an interface that
provides the raw SQL statements.
The actual impl sits alongside the interface declaration, which is
generally regarded as best practice (though there are no canonical sources).
Especially in this case where the impl is tiny (functions returning
strings) and relies heavily on the function signatures of the
table struct (for parameters), having the context in the same file
is useful.
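A hedged sketch of that layout with placeholder names: the per-engine implementation only returns SQL strings, while the single shared table struct carries the logic.

```go
package tables

import (
	"context"
	"database/sql"
)

// backwardsExtremitiesStatements is implemented once per engine and only
// returns SQL strings, so each implementation stays tiny.
type backwardsExtremitiesStatements interface {
	InsertBackwardExtremitySQL() string
	SelectBackwardExtremitiesForRoomSQL() string
}

// BackwardsExtremities is the single shared table struct used by both engines.
type BackwardsExtremities struct {
	DB    *sql.DB
	Stmts backwardsExtremitiesStatements
}

// Insert runs the engine-specific SQL through the shared code path.
func (t *BackwardsExtremities) Insert(ctx context.Context, roomID, eventID string) error {
	_, err := t.DB.ExecContext(ctx, t.Stmts.InsertBackwardExtremitySQL(), roomID, eventID)
	return err
}
```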
* Remove _table redundancy
* Update gomatrixserverlib
* Try to build invite stripped state if not given to us
* SendInvite improvements
* Transpose invite_room_state into invite_state.events for sync API
* Remove syncapi debugging output
* Use RespInviteV2
* Update gomatrixserverlib
* Send the invite event as a normal roomserver event too, for incorporating into the room (should this be done by the roomserver automatically for invite inputs?)
* Federation sender use invite_room_state, room server try to insert membership state
* Check supported room versions on the invite endpoint
* Prevent roomserver query API from trying to handle requests for stub rooms
* Adding a nolint
* Replace IsRoomStub with RoomNIDExcludingStubs, fix query API to use that instead
* Review comments
* Implement history visibility checks for /backfill
Required for p2p to show history correctly.
* Add sytest
* Logging
* Fix two backfill bugs which prevented backfill from working correctly
- When receiving backfill requests, do not send the event that was in the original request.
- When storing backfill results, correctly update the backwards extremity for the room.
* hack: make backfill work multiple times
* add sqlite impl and remove logging
* Linting
* Use HeaderedEvent in syncapi
* Update notifier test
* Fix persisting headered event
* Clean up unused API function
* Fix overshadowed err from linter
* Write headered JSON to invites table too
* Rename event_json to headered_event_json in syncapi database schemas
* Fix invites_table queries
* Update QueryRoomVersionCapabilitiesResponse comment
* Fix syncapi SQLite
* Use a fork of pq which supports userCurrent on wasm
* Use sqlite3_js driver when running in JS
* Add cmd/dendritejs to pull in sqlite3_js driver for wasm only
* Update to latest go-sqlite-js version
* Replace prometheus with a stub. sigh
* Hard-code a config and don't use opentracing
* Latest go-sqlite3-js version
* Generate a key for now
* Listen for fetch traffic rather than HTTP
* Latest hacks for js
* libp2p support
* More libp2p
* Fork gjson to allow us to enforce auth checks as before
Previously, all events would come down redacted because the hash
checks would fail. They would fail because sjson.DeleteBytes didn't
remove keys not used for hashing. That in turn was because a build
tag pulled in a file which no-oped the returned index.
See https://github.com/tidwall/gjson/issues/157
When it's resolved, let's go back to mainline.
* Use gjson@1.6.0 as it fixes https://github.com/tidwall/gjson/issues/157
* Use latest gomatrixserverlib for sig checks
* Fix a bug which could cause exclude_from_sync to not be set
Caused when sending events over federation.
* Use query variadic to make lookups actually work!
* Latest gomatrixserverlib
* Add notes on getting p2p up and running
Partly so I don't forget myself!
* refactor: Move p2p-specific stuff to cmd/dendritejs
This is important because otherwise the normal build of dendrite will
fail: the p2p libraries depend on syscall/js, which doesn't work in
normal builds.
Also, clean up main.go to read a bit better.
* Update go-http-js-libp2p to return errors from RoundTrip
* Add an LRU cache around the key DB
We actually need this for P2P because otherwise we can *segfault*
with things like: "runtime: unexpected return pc for runtime.handleEvent"
where the event is a `syscall/js` event, caused by spamming sql.js,
which in turn is caused by "Checking event signatures for 14 events of
room state" hammering the key DB repeatedly in quick succession.
Using a cache fixes this, though the underlying cause is probably a bug
in the version of Go I'm on (1.13.7).
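A sketch of the caching idea using `hashicorp/golang-lru` as an example LRU implementation; the fetcher interface and wrapper below are illustrative stand-ins, not the real key database types.

```go
package keycache

import (
	lru "github.com/hashicorp/golang-lru"
)

// serverKeyFetcher is a stand-in for whatever actually queries the key DB.
type serverKeyFetcher interface {
	FetchKey(serverName, keyID string) (string, error)
}

// cachedKeyFetcher wraps a fetcher with a fixed-size LRU so repeated
// signature checks don't hammer the key DB (or sql.js) in quick succession.
type cachedKeyFetcher struct {
	inner serverKeyFetcher
	cache *lru.Cache
}

func newCachedKeyFetcher(inner serverKeyFetcher, size int) (*cachedKeyFetcher, error) {
	c, err := lru.New(size)
	if err != nil {
		return nil, err
	}
	return &cachedKeyFetcher{inner: inner, cache: c}, nil
}

func (f *cachedKeyFetcher) FetchKey(serverName, keyID string) (string, error) {
	cacheKey := serverName + "/" + keyID
	if v, ok := f.cache.Get(cacheKey); ok {
		return v.(string), nil // cache hit: skip the key DB entirely
	}
	key, err := f.inner.FetchKey(serverName, keyID)
	if err != nil {
		return "", err
	}
	f.cache.Add(cacheKey, key)
	return key, nil
}
```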
* breaking: Add Tracing.Enabled to toggle whether we do opentracing
Defaults to false, which is why this is a breaking change. We need
this flag because WASM builds cannot do opentracing.
* Start adding conditional builds for wasm to handle lib/pq
The general idea is for the wasm build to have a `NewXXXDatabase`
that doesn't import any postgres package, so we never import
`lib/pq`, which doesn't work under WASM (undefined `userCurrent`).
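An illustrative pair of files showing the conditional-build approach; the file names, the sqlite driver import path, and the `NewDatabase` signature are placeholders, not the real constructors.

```go
// file: storage/storage.go (built everywhere except wasm)
// +build !wasm

package storage

import (
	"database/sql"

	_ "github.com/lib/pq" // postgres driver; fine on native builds
)

// NewDatabase is a placeholder signature, not Dendrite's real constructor.
func NewDatabase(driver, dsn string) (*sql.DB, error) {
	return sql.Open(driver, dsn)
}
```

```go
// file: storage/storage_wasm.go (the _wasm suffix restricts it to GOARCH=wasm)
package storage

import (
	"database/sql"

	_ "github.com/matrix-org/go-sqlite3-js" // sqlite driver usable from JS; import path illustrative
)

func NewDatabase(driver, dsn string) (*sql.DB, error) {
	// lib/pq is never imported in this file, so wasm builds avoid the
	// undefined userCurrent problem entirely.
	return sql.Open("sqlite3_js", dsn)
}
```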
* Remove lib/pq for wasm for syncapi
* Add conditional building to remaining storage APIs
* Update build script to set env vars correctly for dendritejs
* sqlite bug fixes
* Docs
* Add a no-op main for dendritejs when not building under wasm
* Use the real prometheus, even for WASM
Instead, the dendrite-sw.js must mock out `process.pid` and
`fs.stat` - which must invoke the callback with an error (e.g. `EINVAL`)
in order for it to work:
```
global.process = {
pid: 1,
};
global.fs.stat = function(path, cb) {
cb({
code: "EINVAL",
});
}
```
* Linting
* bugfix: fix panic on new invite events from sytest
I'm unsure why the previous code didn't work, but it's
clearer, quicker, and easier to read the `LastInsertId()` way.
Previously, the code would panic as the SELECT would fail
to find the last inserted row ID.
* sqlite: Fix UNIQUE violations and close more cursors
- Add missing `defer rows.Close()`
- Do not have the state block NID as a PRIMARY KEY, or else it breaks for blocks
with >1 state event in them. Instead, rejig the queries so we can still
have monotonically increasing integers without using AUTOINCREMENT (which
mandates PRIMARY KEY).
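A sketch of one way to allocate monotonically increasing NIDs without AUTOINCREMENT, inside the caller's transaction; the table and column names are placeholders.

```go
package storage

import (
	"context"
	"database/sql"
)

// nextStateBlockNID reserves the next NID inside the caller's transaction by
// reading the current maximum and adding one; no AUTOINCREMENT (and therefore
// no PRIMARY KEY on the NID column) is needed.
func nextStateBlockNID(ctx context.Context, txn *sql.Tx) (int64, error) {
	var nid int64
	// COALESCE handles the empty-table case by starting at 1.
	err := txn.QueryRowContext(ctx,
		"SELECT COALESCE(MAX(state_block_nid), 0) + 1 FROM roomserver_state_block",
	).Scan(&nid)
	return nid, err
}
```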
* sqlite: Add missing variadic function
* Use LastInsertId because empirically it works over the SELECT form (though I don't know why that is)
* sqlite: Fix invite table by using the global stream pos rather than one specific to invites
If we don't use the global, clients don't get notified about any invites
because the position is too low.
* linting: shadowing
* sqlite: do not use last rowid, we already know the stream pos!
* sqlite: Fix account data table in syncapi by committing insert txns!
* sqlite: Fix failing federation invite
Was failing with 'database is locked' due to multiple write txns
being taken out.
* sqlite: Ensure we return exactly the number of events found in the database
Previously we would return exactly the number of *requested* events, which
meant that several zero-initialised events would bubble through the system,
failing at JSON serialisation time.
* sqlite: let's just ignore the problem for now....
* linting