Currently on FreeBSD, osi_TryEvictVCache calls vgone() for our vnode
after checking if the given vcache is in use. vgone() then calls our
VOP_RECLAIM operation, which calls afs_vop_reclaim, which calls
afs_FlushVCache to actually flush the vcache.
The current approach has at least the following major issues:
- In afs_vop_reclaim, we return success even if afs_FlushVCache()
fails. This allows FreeBSD to reuse the vnode for another file, but
the vnode is still being referenced by our vcache, which is
referenced by the global VLRU and various other structures. This
causes all kinds of weird errors, since we try to use the underlying
vnode for different files.
- After the relevant checks in osi_TryEvictVCache are done, another
thread can acquire a new reference to our vcache (this can happen
while vgone() is running up until the vnode is locked). This new
reference will cause afs_FlushVCache to fail.
- Our afs_vop_reclaim callback is called while the vnode is locked,
and can acquire afs_xvcache. Other code locks the vnode while
afs_xvcache is already held (such as afs_PutVCache -> vrele). This
can lead to deadlocks if two threads try to run these codepaths for
the same vnode at the same time.
- afs_vop_reclaim optionally acquires afs_xvcache based on the return
value of CheckLock(&afs_xvcache). However, CheckLock just reports
whether the lock is held by anyone, not whether the current thread holds the
lock. This can result in the rest of the function running without
afs_xvcache actually being held if we drop AFS_GLOCK at any point.
- osi_TryEvictVCache() tries to vn_lock() the target vnode, but we may
already have another vnode locked in the current thread. If the
vnode we're trying to evict is a descendant of a vnode we already
have locked, this can deadlock.
To fix these issues, make some changes to how our vcache management
works on FreeBSD:
- Do not allow anyone to hold a new reference on a VI_DOOMED vnode.
We do this by checking for VI_DOOMED in osi_vnhold, and returning an
error if VI_DOOMED is set.
- In afs_vop_reclaim, panic if afs_FlushVCache fails. With the new
VI_DOOMED check, afs_FlushVCache should now never fail; and if it
somehow does, panic'ing immediately is better than corrupting
various structures and panic'ing later on.
- Move around some of the relevant locking in afs_vop_reclaim to fix
the lock-related issues.
- In osi_TryEvictVCache, don't wait for the vnode lock (LK_NOWAIT);
treat the vnode as "in use" if we can't immediately obtain the lock.
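The key changes, sketched below (FreeBSD vnode API names from the
VI_DOOMED era; the exact error code and surrounding details are
assumptions, not the literal patch):

    /* osi_vnhold: refuse to take a new reference on a doomed vnode. */
    int
    osi_vnhold(struct vcache *avc)
    {
        struct vnode *vp = AFSTOV(avc);

        VI_LOCK(vp);
        if (vp->v_iflag & VI_DOOMED) {
            /* The vnode is being reclaimed; do not resurrect it. */
            VI_UNLOCK(vp);
            return ENOENT;
        }
        vrefl(vp);
        VI_UNLOCK(vp);
        return 0;
    }

    /* osi_TryEvictVCache: a contended vnode lock means "in use". */
    if (vn_lock(vp, LK_EXCLUSIVE | LK_NOWAIT) != 0) {
        *slept = 0;
        return 0;
    }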
Thanks to tcreech@tcreech.com and kaduk@mit.edu for insight and help
investigating the relevant issues.
FIXES 135041
Change-Id: I23e94ecebbddc8c68a8f4ea918d64efd0f9f9dfd
Reviewed-on: https://gerrit.openafs.org/13972
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
These typedefs have been present since commit
a41175cfbbf4d06ccfe14ae54bef8b7464ecd80b
"initial-darwin-support-20010327"; at least some of this material was
obtained directly from IBM after the initial code import.
Based on research of old Darwin source code and kernel documentation,
the Event Trace Analysis Package (ETAP) was a lock-profiling interface
provided in older versions of Mach and xnu. ETAP was not enabled by
default; the kernel had to be recompiled with certain options to enable
it. Support for ETAP was removed from the xnu tree sometime between
xnu-517 (10.3 Panther) and xnu-792 (10.4 Tiger), although some
references remain in the latter under PPC support (osfmk/ppc/hw_lock.s).
All remaining references to etap_event_t disappeared when PPC support
was removed, some time between xnu-1456.1.26 (10.6 Snow Leopard) and
xnu-1699.24.8 (10.7.2 Lion).
Therefore, it is possible that these typedefs were needed in the past by
(IBM/Transarc) AFS to support use of some lock APIs (e.g.,
simple_lock_init, usimple_lock_init) after the ETAP code was withdrawn
from xnu. However, these typedefs have probably always been vestigial
for OpenAFS, because OpenAFS has never used any lock API that took
etap_event_t as an argument.
Regardless, OpenAFS does not need these definitions to build and run on
any currently supported version of macOS.
Remove the vestigial code.
No functional change should be incurred by this commit.
Change-Id: I39b3f82a8933d15ef5b5de5eb92366c0a31f8bb6
Reviewed-on: https://gerrit.openafs.org/14219
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Marcio Brito Barbosa <mbarbosa@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
This code has been dead since its introduction, because XAFS_DARWIN_ENV
is a typo for AFS_DARWIN_ENV.
Introduced from day 1 of DARWIN support with commit
a41175cfbbf4d06ccfe14ae54bef8b7464ecd80b
"initial-darwin-support-20010327".
No functional change should be incurred by this commit.
Change-Id: I6b74f01b4dd1230559ac8d75f0644071357f38b7
Reviewed-on: https://gerrit.openafs.org/14218
Reviewed-by: Marcio Brito Barbosa <mbarbosa@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Since commit 130144850c6d05bc69e06257a5d7219eb98697d8 "xstat: cm xstat
time values are 32 bit", OpenAFS has had two timeval definitions:
osi_timeval_t and osi_timeval32_t. Since they are functionally
equivalent, convert all references to osi_timeval_t to osi_timeval32_t.
This makes clear that this struct is always expected to contain 32-bit
members for tv_sec and tv_usec.
There are still a few platforms where osi_timeval32_t is mistakenly
defined with 64-bit members; these will be addressed in future commits.
No functional change should be incurred by this commit.
Change-Id: I3e8e44235e813571723fcd114194f6cb83de90e4
Reviewed-on: https://gerrit.openafs.org/14215
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Cheyenne Wills <cwills@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
osi_SetTime has been dead code since the original IBM code import.
Remove it from the tree.
No functional change is incurred by this commit.
Change-Id: I25612a044ad550d798003979afc6845e502ebe3b
Reviewed-on: https://gerrit.openafs.org/14191
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Commit c861bb0d779b54236b63eda87d9dfaf7792d1659 "Additional UKERNEL
headers, prototyping and other fixes" added the following lines to
src/rx/rx_prototypes.h:
#if defined(UKERNEL) && !defined(osi_GetTime)
extern int osi_GetTime(struct timeval *tv);
#endif
However, this appears to be redundant with the declaration in
src/afs/afs_prototypes.h:
#ifdef UKERNEL
...
extern int osi_GetTime(struct timeval *tv);
...
#endif
which was added much earlier with commit
8f2df21ffe59e9aa66219bf24656775b584c122d
"pull-prototypes-to-head-20020821".
Remove the redundant declaration in rx/rx_prototypes.h.
No functional change is incurred by this commit.
Change-Id: I2032d302e862eed47250357e604cba4f26e89814
Reviewed-on: https://gerrit.openafs.org/14192
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Extern declarations for the xstats recording areas have been commented
out since 8f2df21ffe59e9aa66219bf24656775b584c122d
"pull-prototypes-to-head-20020821".
Remove the vestigial comments.
No functional change is incurred by this commit.
Change-Id: Ieef9a4b21e78db8d5427bed7b621ba043663b1d1
Reviewed-on: https://gerrit.openafs.org/14197
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
afs_GetCMSTats, afs_AddToMean, and macro AFS_MEANCNT have been dead code
since the original IBM code import. Remove them from the tree.
No functional change is incurred by this commit.
Change-Id: Icd6aeff7896d69a4d334531b5e0c632d807457ce
Reviewed-on: https://gerrit.openafs.org/14196
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
For 32-bit Linux (e.g., arch i586), AFS_LINUX_64BIT_KERNEL is not
defined, so osi_timeval32_t is defined as a typedef of the native
'timeval'. However, as of commit
c766d1472c70d25ad475cf56042af1652e792b23 "y2038: hide
timeval/timespec/itimerval/itimerspec types" (Linux 5.6), the native
timeval struct is no longer available. On such a kernel, the OpenAFS
build will fail because osi_timeval32_t is not properly defined.
Instead, add new conditionals to properly define osi_timeval32_t for
this platform.
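A sketch of the kind of definition needed here (the real change is
guarded by additional kernel-version conditionals; the exact member
types are an assumption):

    /* Always 32-bit members, independent of what the kernel headers provide. */
    typedef struct {
        afs_int32 tv_sec;
        afs_int32 tv_usec;
    } osi_timeval32_t;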
Change-Id: I1eddeeb3651dcd3c55920ab1d2ad2838f4729bdd
Reviewed-on: https://gerrit.openafs.org/14216
Reviewed-by: Cheyenne Wills <cwills@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Make a few changes to osi_vnhold and AFS_FAST_HOLD:
- Currently, the second argument of osi_vnhold ("retry") is never used
by any implementation. Get rid of it.
- AFS_FAST_HOLD() is the same as osi_vnhold(). Get rid of
AFS_FAST_HOLD, and just have all callers use osi_vnhold instead.
- Allow osi_vnhold to return an error, and adjust callers to handle
it.
- Change osi_vnhold to be a real function, instead of a macro, to make
nontrivial implementations less cumbersome.
Most platforms never return an error from osi_vnhold(), so the added
code paths to check the return value of osi_vnhold() will not trigger.
However, this paves the way for future commits that do make
osi_vnhold() return an error.
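On platforms where taking a hold cannot fail, the new function stays
trivial; a sketch (VN_HOLD and AFSTOV are the existing macros, the rest
is illustrative):

    int
    osi_vnhold(struct vcache *avc)
    {
        VN_HOLD(AFSTOV(avc));
        return 0;
    }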
Change-Id: Id2f3717be6c305d06305685247ac789815e1ebf7
Reviewed-on: https://gerrit.openafs.org/13971
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
In the vlserver, when we add a new vlentry or extent block, we grow
the VLDB by doing something like this:
vital_header.eofPtr += sizeof(item);
Since we don't check for overflow, and all of our offset-related
variables are signed 32-bit integers, this can cause some odd behavior
if we try to grow the database to be over 2 GiB in size.
To avoid this, change the two places in vlserver code that grow the
database to use a new function, grow_eofPtr(), which checks for 31-bit
overflow. If we are about to overflow, log a message and return an
error.
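A sketch of the overflow check (the real grow_eofPtr also logs the
failure and is shared by both callers; the exact signature is an
approximation):

    static int
    grow_eofPtr(struct vlheader *cheader, afs_int32 bump, afs_int32 *a_blockindex)
    {
        afs_int32 blockindex = ntohl(cheader->vital_header.eofPtr);

        if (blockindex < 0 || MAX_AFS_INT32 - blockindex < bump) {
            /* Refuse to grow the VLDB beyond 2^31 - 1 bytes. */
            return -1;
        }
        *a_blockindex = blockindex;
        cheader->vital_header.eofPtr = htonl(blockindex + bump);
        return 0;
    }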
See the following for a specific example of our "odd behavior" when we
overflow the 2 GiB limit in the VLDB:
With 1 extent block, we can create 14509076 vlentries successfully. On
the 14509077th vlentry, we'll attempt to write the entry to offset
2147483560 (0x7FFFFFA8). Since a vlentry is 148 bytes long, we'll
write all the way through offset 2147483707 (0x8000003B), which is
over the 31-bit limit.
In the udisk subsystem, this results in writing to page numbers
2097151 and -2097152 (since our ubik pages are 1k, and going over the
31-bit limit causes us to treat offsets as negative). These pages
start at physical offsets 2147482688 (0x7FFFFC40) and -2147483584
(-0x7FFFFFC0) in our vldb.DB0 (where offset is page*1024+64).
Modifying each of these pages involves reading in the existing page
first, modifying the parts we are changing, and writing it back. This
works just fine for 2097151, but of course fails for -2097152. The
latter fails in DReadBuffer when eventually our pread() fails with
EINVAL, and causes ubik to log the message:
Ubik: Error reading database file: errno=22
But when DReadBuffer fails, DReadBufferForWrite assumes this is due to
EOF, and just creates a new buffer for the given page (DNewBuffer).
So, the udisk_write() call ultimately succeeds.
When we go to flush the dirty data to disk when committing the
transaction, after we have successfully written the transaction log,
DFlush() fails for the -2097152 page when the pwrite() call eventually
fails with EINVAL, causing ubik to panic, logging the messages:
Ubik PANIC:
Writing Ubik DB modifications
When the vlserver gets restarted by bosserver, we then process the
transaction log, and perform the operations in the log before starting
up (ReplayLog). The log records the actual data we wrote, not split
into pages, and the log-replaying code writes directly to the db
using uphys_write instead of udisk_write. So, because of this, the
write actually succeeds when replaying the log, since we just write
148 bytes to offset 2147483624 (0x7FFFFFE8), and no negative offsets
are used.
The vlserver will then be able to run, but will be unable to read that
newly-created vlentry, since it involves reading a ubik page beyond
the 31-bit boundary. That means trying to look up that entry will fail
with i/o errors, as will lookups of any entry on the same hash chain as
the new entry (since the new entry will be added to the head of the
hash chain). Listing all entries in the database will also just show
an empty database, since our vital_header.eofPtr will be negative, and
we determine EOF by comparing our current blockindex to the value in
eofPtr.
Change-Id: Ie0b7ac61f9121fa265686449efbae8e18edb1896
Reviewed-on: https://gerrit.openafs.org/14180
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Reviewed-by: Cheyenne Wills <cwills@sinenomine.net>
Building with gcc-10.1 produces a warning (error if --enable-checking)
in vol-salvage.c
error: ‘%s’ directive output may be truncated writing up to 755 bytes
into a region of size 255 [-Werror=format-truncation=]
809 | snprintf(inodeListPath, 255, "%s" OS_DIRSEP "salvage.inodes.%s.%d", tdir, name,
Use strdup/asprintf to allocate the buffer dynamically instead of using
a buffer with a hardcoded size.
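A sketch of the replacement (the final format argument is truncated out
of the warning above, so 'instance' below is a hypothetical stand-in
for it):

    char *inodeListPath = NULL;

    if (asprintf(&inodeListPath, "%s" OS_DIRSEP "salvage.inodes.%s.%d",
                 tdir, name, instance) < 0) {
        inodeListPath = NULL;   /* contents are undefined on failure */
    }
    /* ... use inodeListPath, checking for NULL, then ... */
    free(inodeListPath);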
Change-Id: Ib2f01c2eb73c7abc162be2b1939e55688a81f812
Reviewed-on: https://gerrit.openafs.org/14207
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Currently, and since OpenAFS 1.0, if write() fails here, we leak the
file descriptor. A write() failure should be very unlikely, but close
the fd to make sure we avoid the leak.
Change-Id: I4e8ed4216c4aa5041232fc798a7bc59f6a5570d9
Reviewed-on: https://gerrit.openafs.org/14213
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Call shutdown_rx() and shutdown_rxevent() near the end of our shutdown
sequence, in order to free various Rx resources and avoid memory
leaks.
Change-Id: Id2e912295cf760b5ad83057487e6c4c4fadda11b
Reviewed-on: https://gerrit.openafs.org/13719
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
The Linux function __pagevec_lru_add is no longer exported as of Linux
5.7-rc1; see commit bde07cfc65da5fe6c63fe23f035f5ccc0ffd89e0
"mm/swap.c: not necessary to export __pagevec_lru_add()".
As a replacement, the Linux function lru_cache_add_file can be used for
adding a page to the lru cache. The internal processing of
lru_cache_add_file manages its own internal pagevec and performs the
following:
get_page(...)
if(!pagevec_add(...))
__pagevec_lru_add_file(...)
Introduce an autoconf test for lru_cache_add_file and replace the calls
associated with __pagevec_lru_add with lru_cache_add_file.
NOTE: see Linux commit a0b8cab3b9b2efadabdcff264c450ca515e2619c
"mm: remove lru parameter from __pagevec_lru_add and remove parts of
pagevec API" as a reference for this change.
lru_cache_add_file was introduced in Linux 2.6.28, so this change
affects systems running Linux 2.6.28 and later.
Change-Id: I12b32fd5061fc136f8b96ef3605e0bab736ca9ed
Reviewed-on: https://gerrit.openafs.org/14159
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Define static functions afs_lru_cache_init, afs_lru_cache_add and
afs_lru_cache_finalize to handle interfacing with Linux's lru
facilities.
This change's primary purpose is to isolate the preprocessor
conditionals associated with the details of the system lru interfaces
to just these functions, and to simplify the areas that use lru caching
by removing the conditionals from them.
As Linux's lru facilities change, additional conditional code will be
needed.
Change-Id: I74c94bb712359975e3fd1df85f1b338b215f61b0
Reviewed-on: https://gerrit.openafs.org/14167
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
We are hitting the net here; we certainly should not be holding
AFS_GLOCK while waiting for the server's response.
Found via FreeBSD WITNESS.
Change-Id: Ie727db27adaeed23ac8cff7665143bae2ce2ede8
Reviewed-on: https://gerrit.openafs.org/14181
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
In the tkt_DecodeTicket5 function (rxkad/ticket5.c), the key size is
calculated by calling krb5_enctype_keybits and dividing the number of
bits by 8. For 3DES the number of key bits is 168, so the computed key
size is 21 (168/8), but the actual size of a 3DES key is 24 bytes. This
key size is passed to _afsconf_GetRxkadKrb5Key, which compares key
sizes; because of the mismatch, it returns AFSCONF_BADKEY.
To fix this, get the key size from krb5_enctype_keysize instead of
krb5_enctype_keybits. Thanks to John Janosik (jpjanosi@us.ibm.com) for
analyzing and fixing this issue.
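A sketch of the substitution (the surrounding ticket-decoding and error
handling are omitted; the prototype shown follows the usual Heimdal
signature and is an assumption here):

    size_t keysize;

    /* krb5_enctype_keybits()/8 yields 21 for 3DES, but the stored key,
     * including parity bits, is really 24 bytes. */
    ret = krb5_enctype_keysize(context, enctype, &keysize);
    if (ret != 0)
        return ret;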
Change-Id: Ia6f70b878feaa91855f9544ec1de81a6196a85a8
Reviewed-on: https://gerrit.openafs.org/14203
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Michael Meffie <mmeffie@sinenomine.net>
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Commit 8d939c08 (rx: avoid nat ping during shutdown) added a call
to shutdown_rx() inside the DARWIN shutdown sequence, before the rx
socket was closed. From the commit message, it sounds like this was
done to avoid NAT pings from calling osi_NetSend during the shutdown
sequence after the rx socket was closed; calling shutdown_rx() before
closing the socket would cause any connections we had to be destroyed
first, avoiding that.
The problem is that shutdown_rx() then gets called when
osi_StopNetIfPoller is called, which is much earlier than some other
portions of the shutdown sequence, some of which may still hold references
to e.g. rx connections. If we try to, for instance, destroy an rx
connection after shutdown_rx() is called, we could panic.
An earlier version of that commit (gerrit PS1) just tried to insert a
check before the relevant osi_NetSend call, making us just skip the
osi_NetSend if the shutdown sequence had been started. So to avoid the
above issue, try to implement that approach instead. And instead of
doing it just for NAT pings, we can do it for almost all osi_NetSend
calls (besides those involved in the shutdown sequence itself), by
checking this in rxi_NetSend. Also return an error (ESHUTDOWN) if we
skip the osi_NetSend call, so we're not completely silent about doing
so.
This means we also remove the call to shutdown_rx() inside DARWIN's
osi_StopNetIfPoller(). This allows us to interact with Rx objects
during more of the shutdown process in cross-platform code.
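A sketch of the guard (rx_ShuttingDown() is a hypothetical stand-in for
however the shutdown state is actually tested, and the argument list
only approximates osi_NetSend's):

    static int
    rxi_NetSend(osi_socket socket, void *addr, struct iovec *dvec,
                int nvecs, int length, int istack)
    {
        if (rx_ShuttingDown())
            return ESHUTDOWN;   /* refuse the send, but not silently */
        return osi_NetSend(socket, addr, dvec, nvecs, length, istack);
    }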
Change-Id: I4e631b28d090635aeacd59de0fd237d572f97e93
Reviewed-on: https://gerrit.openafs.org/13718
Reviewed-by: Mark Vitale <mvitale@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Commit a455452d (LINUX 5.3: Add comments for fallthrough switch cases)
added the special /* fall through */ comment to various switch/case
blocks, in order to avoid implicit-fallthrough warnings from causing
the build to fail when building the Linux kernel module.
In this commit, add additional /* fall through */ comments to the rest
of the tree where falling through is intentional. Add a "break;" in one
place in dumptool.c where falling through seems like a mistake, and flag
certain functions as AFS_NORETURN to avoid needing to explicitly break
or fallthrough.
Check for the availability of the -Wimplicit-fallthrough compiler flag
and use it when --enable-checking is set, to prevent additional cases
from creeping into the tree.
Note: the -Wimplicit-fallthrough compiler flag was added in gcc 7.
Change-Id: Iae34e7969606603da8358d7cfa5fd04279b218dc
Reviewed-on: https://gerrit.openafs.org/14125
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
The salvageserver -parallel option accepts an "all<number>" argument,
but the code does not parse the numeric part correctly. Because of
this, only a single salvageserver instance was running even when a
larger number was given with the "all" argument.
With this fix, the numeric part of the "all" argument is parsed
correctly and the required number of salvageserver instances is started.
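A sketch of the intended parsing (variable names here are hypothetical,
not the actual option-handling code):

    /* Accept "all" optionally followed by a count, e.g. "all4". */
    int parallel = 1;

    if (strncmp(arg, "all", 3) == 0)
        arg += 3;
    if (*arg != '\0')
        parallel = atoi(arg);
    if (parallel < 1)
        parallel = 1;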
Change-Id: Ib6318b1d57d04fecb84915e2dabe40930ea76499
Reviewed-on: https://gerrit.openafs.org/14201
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Use the AX_APPEND_COMPILE_FLAGS macro to test and set compiler
specific flags.
Remove the OPENAFS_GCC_SUPPORTS_MARCH check entirely (and the
associated P5PLUS_KOPTS), since nothing has used it for quite some
time.
Change-Id: Ic9626c52ac62cf83d4b8c787aa5aa966e558a781
Reviewed-on: https://gerrit.openafs.org/14132
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
In urecovery_Interact, if any of our operations fail around
calling DISK_GetFile, we will jump to FetchEndCall and eventually
unlink 'pbuffer'. But if we failed before opening our .DB0.TMP file,
the contents of 'pbuffer' will not be initialized yet.
During most iterations of the recovery loop, the contents of 'pbuffer'
will be filled in from previous loops, and it should always stay the
same, so it's not a big problem. But if this is the first iteration of
the loop, the contents of 'pbuffer' may be stack garbage.
Solve this in two ways. To make sure we don't use garbage contents in
'pbuffer', memset the whole thing to zeroes at the beginning of
urecovery_Interact(). And then to make sure we're not reusing
'pbuffer' contents from previous iterations of the loop, also clear
the first character to NUL each time we arrive at this area of the
recovery code. And avoid unlinking anything if pbuffer starts with a
NUL.
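A sketch of the three guards (placement is described in the comments;
pbuffer is the existing local array in urecovery_Interact):

    /* Top of urecovery_Interact(): never let pbuffer hold stack garbage. */
    memset(pbuffer, 0, sizeof(pbuffer));

    /* Before each fetch attempt: forget the path from any previous iteration. */
    pbuffer[0] = '\0';

    /* Error path: only unlink if we actually built a temp-file name. */
    if (pbuffer[0] != '\0')
        unlink(pbuffer);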
Commit 44e80643 (ubik: Avoid unlinking garbage) fixed the same issue,
but only in the SDISK_SendFile codepath in remote.c.
Change-Id: Ica39e66efa89562068a4be3a14b2d13594b77f6d
Reviewed-on: https://gerrit.openafs.org/14153
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Marcio Brito Barbosa <mbarbosa@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Switch to using the m4 macros from autoconf-archive in our
src/external mechanism, instead of manually-copied versions in src/cf.
The src/external copy of ax_gcc_func_attribute.m4 is identical to the
existing copy in src/cf, so that should incur no changes. There are
also a few new macros pulled in, but they are currently unused.
Increase our AC_PREREQ in configure.ac to 2.64, to match the AC_PREREQ
in some of the new files.
Change-Id: I8acfe4df7b9a22d9b9e69004c3438034a2dacadb
Reviewed-on: https://gerrit.openafs.org/14135
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Cheyenne Wills <cwills@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Add autoconf-archive to the src/external mechanism, so we can more
easily import and update the AX_* m4 macros we pull in from
autoconf-archive. Commits are imported from
<git://git.savannah.gnu.org/autoconf-archive.git>.
We already have a copy of ax_gcc_func_attribute.m4 in the tree, so
include that in the list of files. While we're here, also include a
few more macros for checking compiler flags, which will be used in
subsequent commits.
Change-Id: I8c6288fc1d48a47837ca08f8b9207e0ada921af8
Reviewed-on: https://gerrit.openafs.org/14133
Reviewed-by: Cheyenne Wills <cwills@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Add change descriptions for commits not in a stable release.
Change-Id: Ib1d5ce9f558279660abb2473ce8a9fac4fcefa8d
Reviewed-on: https://gerrit.openafs.org/13673
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: Benjamin Kaduk <kaduk@mit.edu>
Pull in all the updates to NEWS that occurred on the 1.8.x branch
in preparation for adding entries for 1.9.0.
Change-Id: I713d1576ef96793f24824f909b26da802b21ec23
Reviewed-on: https://gerrit.openafs.org/14103
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Ever since commits 170dbb3c (rx: Use opr queues) and d9fc4890 (rx: Fix
test for end of call queue for LWP), rx_GetCall checks if the current
call is the last one on rx_incomingCallQueue by doing this:
opr_queue_IsEnd(&rx_incomingCallQueue, cursor)
But opr_queue_IsEnd checks if the given pointer is the _end_ of the
list; that is, if it's the end-of-list sentinel, not an item on the
actual list. Testing for the last item in a list is what
opr_queue_IsLast is for. This is the same convention that the old Rx
queues used, but 170dbb3c just accidentally replaced queue_IsLast with
opr_queue_IsEnd (instead of opr_queue_IsLast), and d9fc4890 copied the
mistake.
Because this check runs inside an opr_queue_Scan loop, opr_queue_IsEnd will
never be true, so we'll never enter this block of code (unless we are
the "fcfs" thread). This means that an incoming Rx call can get stuck
in the incoming call queue, if all of the following are true:
- The incoming call consists of more than 1 packet of incoming data.
- The incoming call "waits" when it comes in (that is, there are no
free threads or the service is over quota).
- The "fcfs" thread doesn't scan the incoming call queue (because it
is idle when the call comes in, but the relevant service is over
quota).
To fix this, just use opr_queue_IsLast here instead of
opr_queue_IsEnd.
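For reference, the semantic difference, assuming the usual
doubly-linked opr_queue layout (simplified forms, not the literal
opr/queue.h definitions):

    /* True only for the list-head sentinel itself, which opr_queue_Scan
     * never hands to the loop body. */
    static int
    queue_is_end(struct opr_queue *q, struct opr_queue *cursor)
    {
        return cursor == q;
    }

    /* True for the final real element on the list. */
    static int
    queue_is_last(struct opr_queue *q, struct opr_queue *cursor)
    {
        return q->prev == cursor;
    }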
Change-Id: I04b90b1279f81dc518eb61e7bd450e3c0be37a77
Reviewed-on: https://gerrit.openafs.org/14158
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Currently, the rx/event-t tests schedule a bunch of events up to 3
seconds in the future, and then we sleep for 3 seconds to give them a
chance to run. Since we're cutting it so close, this sometimes results
in a few events not being run (observed occasionally on FreeBSD 12.1,
where we failed to run about 3 events out of 10000).
To avoid this, just sleep for 4 seconds instead of 3. Also print out a
little more info regarding the number of fired/cancelled events, so we
can see the event count when it's wrong.
Change-Id: I6269bea2c245aeed00c129ff638423d0fa81ad23
Reviewed-on: https://gerrit.openafs.org/14160
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
The format string for CM_TRACE_GMAP takes 4 substitutions, but
afs_linux_mmap only supplies 3. This results in malformed output from
fstrace:
Type mismatch, using raw print.
Gn_map vp 0x%lx addr 0x%lx len 0x%x off 0x%x (afs / zcm)raw op
701087775, time 715.322573, pid 9644
p0:0xc0a66ec0 p1:0x8b81a000 p2:131072
Repair the recording of CM_TRACE_GMAP.
Change-Id: I2b7592e68cb42f5ae490ee8771558e5cc5a2181e
Reviewed-on: https://gerrit.openafs.org/14168
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Currently, 'softsig-helper -buserror' causes a SIGBUS on most
platforms, but can result in SIGSEGV on FreeBSD by default (at least
on 11.3-RELEASE). Skip the test on FreeBSD, until we can provide a
more reliable way to generate SIGBUS.
Note that when the sysctl machdep.prot_fault_translation is set to 1,
'softsig-helper -buserror' generates a SIGBUS instead of SIGSEGV,
suggesting that generating a SIGBUS here is the old 'compat' behavior.
When machdep.prot_fault_translation is 0 (the default), the code path
in the FreeBSD kernel that dictates whether to send a SIGBUS or
SIGSEGV in this situation depends on some autodetection heuristics,
and so may produce different results depending on FreeBSD releases or
even compiler settings (due to detection of ABI based on some ELF
notes in the relevant binary).
For some details on this sysctl, see
<https://www.freebsd.org/news/status/report-2019-07-2019-09.html#Signals-delivered-on-unhandled-Page-Faults>
or the FreeBSD source code. In 11.3-RELEASE, the decision to issue a
SIGBUS or SIGSEGV can be found around sys/amd64/amd64/trap.c:355.
Change-Id: Ib75b43cc12302532ee87a3744fc364424f2a3ca6
Reviewed-on: https://gerrit.openafs.org/14145
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Currently we call vinvalbuf(9) in a few places while holding
AFS_GLOCK, but AFS_GLOCK is a non-sleepable lock (struct mtx), and
vinvalbuf can sleep. This can trigger a panic in some rare conditions,
with the message:
Sleeping thread (tid 100179, pid 95481) owns a non-sleepable lock
To avoid this, drop AFS_GLOCK around a few places that call
vinvalbuf().
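The pattern applied is roughly the following sketch (the actual call
sites differ in the flags they pass to vinvalbuf):

    AFS_GUNLOCK();
    code = vinvalbuf(vp, V_SAVE, PCATCH, 0);
    AFS_GLOCK();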
Change-Id: I58acb144b6ffa007675402e7639b63ff3745dec5
Reviewed-on: https://gerrit.openafs.org/13970
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
In FBSD/osi_vnops.c, we have a few abstractions (e.g. MA_VOP_UNLOCK)
that used to expand to different things for older FreeBSD versions.
Currently, they always expand to the same thing, so just remove the
abstractions.
While we are changing these calls, also change one instance of
MA_VOP_LOCK to vn_lock (instead of VOP_LOCK), since we're not usually
supposed to call VOP_LOCK directly, according to the VOP_LOCK(9)
manpage. The MA_VOP_LOCK call was added in commit bd707fb7
(freebsd-almost-working-client-20020216), seemingly by mistake.
Change-Id: Ia0f28fe658057e87d9103a72296ab899dc762fb6
Reviewed-on: https://gerrit.openafs.org/13843
Reviewed-by: Tim Creech <tcreech@tcreech.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Currently, if we are building with -j2 or higher, we can easily fail
to build some libafs objects because vnode_if.h does not exist yet.
vnode_if.h is generated by the FreeBSD build, but none of our objects
depend on it, so during parallel builds it may not be available by the
time we build, for example, src/external/heimdal/hcrypto/sha256.c.
This results in build errors that can look like this:
--- sha256-kernel.o ---
cc -I. -I.. -I../nfs [...]/src/external/heimdal/hcrypto/sha256.c
In file included from [...]/src/external/heimdal/hcrypto/sha256.c:34:
In file included from [...]/src/crypto/hcrypto/kernel/config.h:30:
In file included from [...]/src/afs/sysincludes.h:354:
/usr/src/sys/sys/vnode.h:588:10: fatal error: 'vnode_if.h' file not found
#include "vnode_if.h"
^~~~~~~~~~~~
1 error generated.
*** [sha256-kernel.o] Error code 1
make[4]: stopped in [...]/src/libafs/MODLOAD
1 error
To avoid this, make all of our libafs objects depend on vnode_if.h.
[adeason@dson.org: Expanded commit message.]
Change-Id: I5a7a6ece8d5fbe6cf1a5b94451c8e8ae93fdc55f
Reviewed-on: https://gerrit.openafs.org/13983
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
The 'perl' binary may not be /usr/bin/perl, depending on the system.
For example, on modern FreeBSD it tends to be /usr/local/bin/perl
instead.
To avoid relying on perl being in a specific location, run it via
/usr/bin/env instead, so we pick up perl from $PATH.
Change-Id: Ic8dc247c82342ff79dfa80426c489ccb8e3e1450
Reviewed-on: https://gerrit.openafs.org/14144
Tested-by: Andrew Deason <adeason@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Currently, our afs_vop_lookup on FBSD tries to only lock 'dvp' for
ISDOTDOT requests when LOCKPARENT and ISLASTCN are set. There are a
couple of problems with this:
- The conditional locking logic involving LOCKPARENT/ISLASTCN is only
relevant in very old FreeBSD releases (per-fs checking of these
flags for parent locking went away around the FreeBSD 6 era).
- Our current logic here is wrong anyway, since we try to lock 'dvp'
twice when those flags are set. This was mostly introduced by commit
2f6be821 (FBSD: band-aid vnode locking in lookup), which added a
lock/unlock pair for 'dvp' around the lock for 'vp', even though
'dvp' was unlocked several lines earlier.
This means that if we hit the relevant code path, we will deadlock,
since we try to lock 'dvp' twice. To avoid this, just remove the
relevant logic for LOCKPARENT/ISLASTCN, since it is only relevant for
old FreeBSD releases that are not supported by us or FreeBSD.
Add and rearrange some comments around here to try to more explicitly
explain the relevant locking rules.
[adeason@dson.org: Commit message rewrite, adding comments, removing
old FreeBSD code.]
Change-Id: Iaa2c55d82c50d5a8ab42c67b0996a2b4fb6e09e6
Reviewed-on: https://gerrit.openafs.org/12578
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
In afs_vop_lookup, the 'wantparent' variable doesn't actually change
any logic in the function. In the if() clause where it's used, the
value of 'wantparent' is only ever used if cnp->cn_nameiop is RENAME
and ISLASTCN is set. But if both of those are true, then the second
half of the if() conditional will always be true, so the value of
'wantparent' doesn't matter.
So to remove this confusing unused logic, remove the 'wantparent'
local var, and all its associated logic.
Issue spotted by kaduk@mit.edu.
Change-Id: Ia63b88d67d21cc2b81a0c25aa31ea60ab202b0a7
Reviewed-on: https://gerrit.openafs.org/14143
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Commit b61eac78 (Linux: setpag() may replace credentials) changed
PSetTokens2 to call crref() after _settok_setParentPag(), since
changing the parent PAG may change our credentials structure. But that
commit did not update the old pioctl PSetTokens, so -setpag
functionality remained broken on Linux for utilities that called the
old pioctl ('klog' is one such utility).
To fix this, we could copy the same code from PSetTokens2 into
PSetTokens. But instead just move this code into _settok_setParentPag
itself, to avoid code duplication. This commit also refactors
_settok_setParentPag slightly to make the platform-specific ifdefs
easier to read through.
Change-Id: I65a165ebb1d823e690926de31b28a7728d2561b9
Reviewed-on: https://gerrit.openafs.org/14147
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Yadavendra Yadav <yadayada@in.ibm.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Commit 48589b5d (Linux: Restore aklog -setpag functionality for kernel
2.6.32+) added code to SetToken() to copy our session keyring to the
parent process, in order to implement -setpag functionality. But this
was removed from SetToken() in commit 1a6d4c16 (Linux: fix aklog
-setpag to work with ktc_SetTokenEx), when the same code was moved to
ktc_SetTokenEx().
Add this code back to SetToken(), so -setpag functionality can work
again with utilities that use older functions such as ktc_SetToken
(e.g. 'klog').
Change-Id: I68c9bf2e19783ea6f84b4c5ebf2ef188d1d8d6ad
Reviewed-on: https://gerrit.openafs.org/14146
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
`make` is not necessarily installed, even when all the other build
requirements are installed.
Add `make` to the list of build requirements. With this change it is
possible to build the packages after running `yum-builddep` to install
all of the needed build requirements.
Change-Id: I032ba1f23d08468c5e21edc5662b20cc9498d1c9
Reviewed-on: https://gerrit.openafs.org/14119
Reviewed-by: Cheyenne Wills <cwills@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>
For our old-style "O" RPCs (e.g. VL_CreateEntry, instead of
VL_CreateEntryN), vlserver calls vldbentry_to_vlentry to convert to
the internal 'struct nvlentry' format. After all of the sites have
been copied to the internal format, we fill the remaining sites by
setting the serverNumber to BADSERVERID. For nvldbentry_to_vlentry, we
do this for NMAXNSERVERS sites, but for vldbentry_to_vlentry, we do
this for OMAXNSERVERS.
The thing is, both functions are filling in entries for a 'struct
nvlentry', which has NMAXNSERVERS 'serverNumber' entries. So for
vldbentry_to_vlentry, we are skipping setting the last few sites
(specifically, NMAXNSERVERS-OMAXNSERVERS = 13-8 = 5).
This can easily cause our O-style RPCs to write out entries to disk
that have uninitialized sites at the end of the array. For example, an
entry with one site should have server numbers that look like this:
serverNumber = {1, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}
That is, one real serverid (a '1' here), followed by twelve
BADSERVERIDs.
But for a VL_CreateEntry call, the 'struct nvlentry' is zeroed out
before vldbentry_to_vlentry is called, and so the server numbers in
the written entry look like this:
serverNumber = {1, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0}
That is, one real serverid (a '1' here), followed by seven
BADSERVERIDs, followed by five '0's.
Most of the time, this is not noticeable, since our code that reads in
entries from disk stops processing sites when we encounter the first
BADSERVERID site (see vlentry_to_nvldbentry). However, if the entry
has 8 sites, then none of the entries will contain BADSERVERID, and so
we will actually process the trailing 5 bogus sites. This would appear
as 5 extra volume sites for a volume, most likely all for the same
server.
For VL_CreateEntry, the vlentry struct is always zeroed before we use
it, so the trailing sites will always be filled with 0. For
VL_ReplaceEntry, the trailing sites will be unchanged from whatever
was read in from the existing disk entry.
To fix this, just change the relevant loop to go through NMAXNSERVERS
entries, so we actually go to the end of the serverNumber (et al)
array.
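A sketch of the corrected fill loop (the surrounding conversion code is
omitted; 'i' continues from the last site actually copied):

    /* Mark every remaining site slot in the nvlentry as unused. */
    for (; i < NMAXNSERVERS; i++)
        VlEntry->serverNumber[i] = BADSERVERID;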
This may appear similar to commit ddf7d2a7 (vlserver: initialize
nvlentry elements after read). However, that commit fixed a case
involving the old vldb database format (which hopefully is not being
used). This commit fixes a case where we are using the new vldb
database format, but with the old RPCs, which may still be used by old
tools.
Change-Id: Ic6882d1452963ca93403748917c313068acfdaab
Reviewed-on: https://gerrit.openafs.org/14139
Tested-by: Andrew Deason <adeason@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Fix warnings issued by recent versions of rpmbuild:
warning: Macro expanded in comment on line 110: %{afsvers}/...
warning: extra tokens at the end of %endif directive in line 1469:
%endif # build_userspace
warning: line 331: It's not recommended to have unversioned Obsoletes:
Obsoletes: openafs-client-compat
The first two warnings are just issues with comments, which apparently
are not completely ignored by rpmbuild. The third issue is a warning
about an unversioned "Obsoletes" directive. Remove the old Obsoletes for
openafs-client-compat, which was obsoleted no later than the 1.4.x
series (more than 10 years ago).
While here, clean up the spec by removing the old cvs $Revision$ keyword
from the comments at the top of the file, and removing an old commented
out setup directive.
Change-Id: I8d7a050ea6a0cc7a2d9a6af9a91d25ce545586e7
Reviewed-on: https://gerrit.openafs.org/14118
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Currently, opr_cache_init requires that opts->n_buckets is a power of
2 (since our underlying opr_dict requires this). However, callers may
want to pick a number of buckets based on some other value. Requiring
each caller to calculate the nearest power-of-2 is annoying, so
instead just have opr_cache_init itself calculate a nearby power of 2.
That is, with this commit, opts->n_buckets is allowed to not be a
power of 2; when it's not a power of 2, opr_cache_init will calculate
the next highest power of 2 and use that as the number of buckets.
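A sketch of the rounding, assuming 32-bit bucket counts (the helper
name is hypothetical):

    /* Round n up to the nearest power of 2 (0 yields 1; values above
     * 2^31 are clamped to 2^31). */
    static afs_uint32
    round_up_pow2(afs_uint32 n)
    {
        afs_uint32 pow2 = 1;

        while (pow2 < n && pow2 < ((afs_uint32)1 << 31))
            pow2 <<= 1;
        return pow2;
    }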
Change-Id: Icd3c56c1fe0733e3dac964ea9a98ff7b436254e6
Reviewed-on: https://gerrit.openafs.org/14122
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Marcio Brito Barbosa <mbarbosa@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Our libafs build logic involves a few targets that 'cd' into a
per-kernel subdir: notably INSTDIRS and DESTDIRS (the targets to 'make
install' or 'make dest' our kernel modules) and COMPDIRS (the target
to setup/build the kernel module).
Both of these potentially 'cd' into a subdirectory (e.g. MODLOAD64),
and run some make rules. Since INSTDIRS and COMPDIRS are different
targets and don't depend on each other for many platforms, running
those rules can happen in parallel. After they 'cd' into the relevant
dir, they run a new 'make' in a subshell, and so underlying rules for
building e.g. AFS_component_version_number.c are not serialized.
So for a parallel build on, say, Solaris, we can encounter errors when
two sub-makes try to make AFS_component_version_number.c at the same
time, which looks something like this (with various lines output from
other sub-processes mixed in):
cd src && cd sys && gmake install
gmake[3]: Leaving directory '/[...]/src/libuafs'
rm -f AFS_component_version_number.c.NEW
/opt/developerstudio12.6/bin/cc [...] -D_KERNEL -DSYSV -dn -m64 -xmodel=kernel -xvector=%none -xregs=no%float -Wu,-save_args -o AFS_component_version_number.o -c AFS_component_version_number.c
mv: cannot access AFS_component_version_number.c.NEW
gmake[4]: *** [/[...]/src/config/Makefile.version:13: AFS_component_version_number.c] Error 2
gmake[4]: Leaving directory '/[...]/src/libafs/MODLOAD64'
gmake[3]: *** [Makefile:85: solaris_instdirs] Error 2
gmake[3]: *** Waiting for unfinished jobs....
To avoid this, just make INSTDIRS and DESTDIRS depend on COMPDIRS, so
we can make sure they don't run at the same time.
Change-Id: I2510e1894c44dd0864cf2eab5613b805342b6718
Reviewed-on: https://gerrit.openafs.org/14137
Tested-by: BuildBot <buildbot@rampaginggeek.com>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
The local variable tapeblocks in GetConfigParams matches a global
variable. Rename the local variable to avoid confusion with the global
name.
Change-Id: I1c30433696a35a74978ef0c23881c82054b416c5
Reviewed-on: https://gerrit.openafs.org/14128
Reviewed-by: Andrew Deason <adeason@sinenomine.net>
Reviewed-by: Benjamin Kaduk <kaduk@mit.edu>
Tested-by: BuildBot <buildbot@rampaginggeek.com>