Compare commits

...

101 Commits

Author SHA1 Message Date
Alexander Ziaee
70f5501e69
Merge cc0ed47523 into f2233ac33a 2024-11-26 19:27:55 +02:00
Zhenlei Huang
f2233ac33a ena: Remove \n from sysctl description
sysctl(8) prints a newline after the description, so there is no need
for this extra newline.
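
A minimal sketch of the convention (the oid and variable names are
illustrative, not the actual ena code): the description string passed
to the SYSCTL macros should carry no trailing newline, since sysctl(8)
adds its own.

    /* Hypothetical example; _hw_ena and ena_sample_stat are made up. */
    SYSCTL_INT(_hw_ena, OID_AUTO, sample_stat, CTLFLAG_RD,
        &ena_sample_stat, 0,
        "Sample description without a trailing newline");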

MFC after:	1 week
2024-11-26 23:52:54 +08:00
Allan Jude
9206c79961 usr.bin/netstat: -n should not print symbolic names
In numeric mode, the default route is printed as "default" rather
than 0.0.0.0/0 or ::/0.

From the man page:
"-n: Show network addresses and ports as numbers.
Normally netstat attempts to resolve addresses and ports, and display
them symbolically.  If the -n option is specified, the address is
printed numerically, according to the address family.
For more information regarding the Internet IPv4 ``dot format'', refer
to inet(3).  Unspecified, or ``wildcard'', addresses and ports appear
as ``*''."

Reported By:	rgrimes
Reviewed by:	emaste, ngie, eadler, seanc
Relnotes:	yes
Sponsored by:	Klara, Inc.
Differential Revision:	https://reviews.freebsd.org/D10320
2024-11-26 14:56:21 +00:00
Christos Margiolis
6d77827b96 sound: Remove unused CHN_F_SILENCE
No functional change intended.

Sponsored by:	The FreeBSD Foundation
MFC after:	2 days
Reviewed by:	dev_submerge.ch, markj, emaste
Differential Revision:	https://reviews.freebsd.org/D47664
2024-11-26 15:48:42 +01:00
Christos Margiolis
5317480967 sound: Remove CHN_F_SLEEPING
The KASSERT in chn_sleep() can be triggered if more than one thread
wants to sleep on a given channel at the same time. While this is not
really a common scenario, tools such as stress2, which use fork() (so
the child processes inherit the parent's FDs), can end up triggering
such scenarios.

Fix this by removing CHN_F_SLEEPING altogether, which is not very useful
in the first place:
- CHN_BROADCAST() checks cv_waiters already, so there is no need to
  check CHN_F_SLEEPING as well.
- We can check whether cv_waiters is 0 in pcm_killchans(), instead of
  whether CHN_F_SLEEPING is not set.

Reported by:	dougm, pho (stress2)
Sponsored by:	The FreeBSD Foundation
MFC after:	2 days
Reviewed by:	dev_submerge.ch, markj
Differential Revision:	https://reviews.freebsd.org/D47559
2024-11-26 15:48:36 +01:00
Christos Margiolis
6d4c59e261 sound: Remove PCM_DETACHING(), SD_F_DETACHING and SD_F_DYING
Since SD_F_REGISTERED is cleared at the same time SD_F_DETACHING and
SD_F_DYING are set, and since PCM_DETACHING() is always used in
conjunction with PCM_REGISTERED()/DSP_REGISTERED(), it is enough to just
check SD_F_REGISTERED.

Sponsored by:	The FreeBSD Foundation
MFC after:	2 days
Reviewed by:	dev_submerge.ch, markj
Differential Revision:	https://reviews.freebsd.org/D47463
2024-11-26 15:48:30 +01:00
Christos Margiolis
2839ad58dd sound: Fix hot-unload panics
This patch fixes multiple different panic scenarios occurring during
hot-unload:

1. The channel is unlocked in chn_read()/chn_write() for uiomove(9) and
   in the meantime we enter pcm_killchans() and free it. By the time we
   have returned from userland and try to lock it back, the channel will
   have been freed.
2. The parent channel has been freed in pcm_killchans(), but at the same
   time, some yet-unstopped vchan's chn_read()/chn_write() calls
   chn_start(), which eventually calls vchan_trigger(), which references
   the freed parent.
3. PCM_WAIT() panics because it references a freed PCM lock.

For scenarios 1 and 2, refactor pcm_killchans() to first make sure all
channels have been stopped, and then proceed to free them one by one, as
opposed to freeing the first free channel until all channels have been
freed. This change makes the code more robust, but might introduce some
performance overhead when many channels are allocated, since we
continuously loop through the channel list until all of them are
stopped, and then we loop one last time to free them.
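
A rough sketch of the two-pass teardown described above (helper names
are illustrative, not the exact pcm_killchans() code):

    /* Pass 1: rescan until every channel has been stopped. */
    restart:
        CHN_FOREACH(ch, d, channels.pcm) {
            if (!channel_stopped(ch)) {
                channel_stop(ch);
                goto restart;
            }
        }
    /* Pass 2: all stopped, now free them one by one. */
    CHN_FOREACH_SAFE(ch, d, channels.pcm, tmp)
        chn_kill(ch);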

For scenario 3, restructure the code so that we can use destroy_dev(9)
instead of destroy_dev_sched(9) in dsp_destroy_dev(). Because
destroy_dev(9) blocks until all references to the device have gone away,
we ensure that the PCM cv and lock will be freed safely.

While here, move the delete_unrhdr(9) calls to pcm_killchans() and
re-order some lines.

Sponsored by:	The FreeBSD Foundation
MFC after:	2 days
Reviewed by:	dev_submerge.ch
Differential Revision:	https://reviews.freebsd.org/D47462
2024-11-26 15:48:24 +01:00
Christos Margiolis
5ac39263d8 sound: Fix chn_trigger() and vchan_trigger() races
Consider the following scenario:

1. CHN currently has its trigger set to PCMTRIG_STOP.
2. Thread A locks CHN, calls CHANNEL_TRIGGER(PCMTRIG_START), sets the
   trigger to PCMTRIG_START and unlocks.
3. Thread B picks up the lock, calls CHANNEL_TRIGGER(PCMTRIG_ABORT) and
   returns a non-zero value, so it returns from chn_trigger() as well.
4. Thread A picks up the lock and adds CHN to the list, which is
   _wrong_, because the last call to CHANNEL_TRIGGER() was with
   PCMTRIG_ABORT, meaning the channel is stopped, yet we are adding it
   to the list and marking it as started.

Another problematic scenario:

1. Thread A locks CHN, sets the trigger to PCMTRIG_ABORT, and unlocks
   CHN. It then locks PCM and _removes_ CHN from the list.
2. In the meantime, since thread A unlocked CHN, thread B has locked it,
   set the trigger to PCMTRIG_START, unlocked it, and is now blocking on
   PCM held by thread A.
3. At the same time, thread C locks CHN, sets the trigger back to
   PCMTRIG_ABORT, unlocks CHN, and is also blocking on PCM. However,
   once thread A unlocks PCM, because thread C is higher-priority than
   thread B, it picks up the PCM lock instead of thread B, and because
   CHN is already removed from the list, and thread B hasn't added it
   back yet, we take a page fault in CHN_REMOVE() by trying to remove a
   non-existent element.

To fix the former scenario, set the channel trigger before the call to
CHANNEL_TRIGGER() (could also come after, doesn't really matter) and
check if anything changed once we lock CHN back.

To fix the latter scenario, use the SAFE variants of CHN_INSERT_HEAD()
and CHN_REMOVE(). A similar scenario can occur in vchan_trigger(), so do
the trigger setting after we've locked the parent channel.
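
A hedged sketch of the ordering fix (illustrative, not the committed
diff): publish the trigger before dropping the channel lock, and
re-validate it after re-acquiring.

    c->trigger = go;                  /* set before CHANNEL_TRIGGER() */
    ret = CHANNEL_TRIGGER(c->methods, c->devinfo, go);
    ...
    CHN_LOCK(c);
    if (c->trigger != go)             /* another thread changed it */
        return (0);                   /* skip the list insert/remove */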

Sponsored by:	The FreeBSD Foundation
MFC after:	2 days
Reviewed by:	dev_submerge.ch
Differential Revision:	https://reviews.freebsd.org/D47461
2024-11-26 15:48:18 +01:00
Christos Margiolis
5bd08172b4 snd_dummy: Fix callout(9) races
Use callout_init_mtx(9) to associate the callback with the driver's
lock. Also make sure the callout is stopped properly during detach.

While here, introduce a dummy_active() function to know when it's
appropriate to stop or not reschedule the callout.
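
A minimal sketch of the callout_init_mtx(9) pattern (the softc fields
are illustrative):

    mtx_init(&sc->lock, "snd_dummy", NULL, MTX_DEF);
    callout_init_mtx(&sc->callout, &sc->lock, 0); /* callback runs locked */
    ...
    /* On detach, make sure the callback can no longer run. */
    callout_drain(&sc->callout);
    mtx_destroy(&sc->lock);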

Sponsored by:	The FreeBSD Foundation
MFC after:	2 days
Reviewed by:	dev_submerge.ch, markj
Differential Revision:	https://reviews.freebsd.org/D47459
2024-11-26 15:48:02 +01:00
Kristof Provost
56b7685ae3 pf: handle IPv6 fragmentation for route-to
If a fragmented IPv6 packet hits a route-to rule we have to first prevent
the pf_test(PF_OUT) check in pf_route6() from refragmenting (and calling
ip6_output()/ip6_forward()). We then have to refragment in pf_route6() and
transmit the packets on the route-to interface.

Split pf_refragment6() into two parts: the first performs the
refragmentation, the second calls ip6_output()/ip6_forward().  Call the
former from pf_route6().

Add a test case for route-to-ing fragmented IPv6 packets to verify this works
as expected.

Sponsored by:	Rubicon Communications, LLC ("Netgate")
Differential Revision:	https://reviews.freebsd.org/D47684
2024-11-26 15:06:52 +01:00
Konstantin Belousov
4cc5d081d8 mlx5en: only enable toggling offload caps if they are supported
Reviewed by:	Ariel Ehrenberg <aehrenberg@nvidia.com>
Sponsored by:	NVidia networking
MFC after:	1 week
2024-11-26 14:34:34 +02:00
Konstantin Belousov
cca0dc49e0 mlx5en: move runtime capabilities checks into helper functions
For TLS TX/RX, ratelimit, and IPSEC offload caps.

Reviewed by:	Ariel Ehrenberg <aehrenberg@nvidia.com>
Sponsored by:	NVidia networking
MFC after:	1 week
2024-11-26 14:34:34 +02:00
Michal Meloun
3abef90c32 arm: Fix VFP state corruption during signal delivery
D37419 corrupts VFP context store on signal delivery and D38696 corrupts PCB
because it performs a binary copy between structures with different layouts.
Revert the problematic parts of these commits to get signal delivery
working. Unfortunately, there are more problems with these revisions and
more fixes need to be developed.

Fixes: 6926e2699a
Fixes: 4d2427f2c4
MFC after:	4 weeks
2024-11-26 12:18:30 +01:00
Edward Tomasz Napierala
c0a5ee953f hms(4): improve scroll with IICHID_SAMPLING
The current quirk is designed to discard duplicated data read from
the chip.  Problem is, it also discards real events when they happen
to be identical, which is the case with scroll wheel events;
differently from X/Y they always move by fixed offset.  This results
in two-finger scroll that would stop mid-way that could be fixed by
manually setting dev.hms.0.drift_thresh to 0.

To fix that, don't discard duplicates when there's wheel movement.
For users with an actual duplicates problem this will result in scroll
suddenly becoming quite inertial, but it will stop moving at any touch,
so it shouldn't be terrible.
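
A hedged sketch of the new filtering rule (variable names are
illustrative): a repeated report is only dropped when the wheel is
idle, since wheel deltas legitimately repeat.

    if (wheel_delta == 0 &&
        memcmp(prev_rep, cur_rep, rep_len) == 0)
        return;     /* genuine duplicate, discard */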

PR:		kern/276709
Reviewed By:	wulf
Differential Revision:	https://reviews.freebsd.org/D47640
2024-11-26 10:28:51 +00:00
Kyle Evans
ccb973da1f kern: restore signal mask before ast() for pselect/ppoll
It's possible to take a signal after pselect/ppoll have set their return
value, but before we actually return to userland.  This results in
taking a signal without reflecting it in the return value, which weakens
the guarantees provided by these functions.

Switch both to restore the signal mask before we would deliver signals
on return to userland.  If a signal was received after the wait was
over, then we'll just have the signal queued up for the next time it
comes unblocked.  The modified signal mask is retained if we were
interrupted so that ast() actually handles the signal, at which point
the signal mask is restored.
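
A userland sketch of the guarantee being restored (not kernel code):
pselect(2) swaps the signal mask atomically, so a signal unblocked
only for the wait must surface as EINTR rather than be taken silently.

    #include <sys/select.h>
    #include <signal.h>

    int
    wait_readable(int fd, const sigset_t *waitmask)
    {
        fd_set rfds;

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        /* Any signal delivered during the wait must yield EINTR. */
        return (pselect(fd + 1, &rfds, NULL, NULL, NULL, waitmask));
    }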

des@ has a test case demonstrating the issue in D47738 which will
follow.

Note for MFC: TDA_PSELECT is a KBI break; we should just inline
ast_sigsuspend() in pselect/ppoll for stable branches.  It's not exactly
the same, but it will be close enough.

Reported by:	des
Reviewed by:	des (earlier version), kib
Sponsored by:	Klara, Inc.
Sponsored by:	NetApp, Inc.
Differential Revision:	https://reviews.freebsd.org/D47741
2024-11-25 22:04:48 -06:00
Kyle Evans
cab31f5633 kern: add a TDA_PSELECT for early restoration of sigmask
It may be the case that we want to avoid delivering signals that are
normally blocked by the thread's signal mask, in which case the syscall
should schedule this one instead to restore the mask prior to delivery.

This will be used by pselect/ppoll to avoid delivering signals that were
supposed to be blocked after the timeout has elapsed.  The name was
chosen as this is the expected behavior of pselect/ppoll, while late
restoration of the mask is exceptional behavior for these specific
calls.

__FreeBSD_version bump as later TDA_* values have changed; third-party
modules that may be using MOD3/MOD4 need to be rebuilt.

Reviewed by:	kib
Sponsored by:	Klara, Inc.
Sponsored by:	NetApp, Inc.
Differential Revision:	https://reviews.freebsd.org/D47741
2024-11-25 22:04:48 -06:00
Jason A. Harmening
5035db222e amdiommu: Fix device table segment base register offsets
Segment base registers are at 8-byte intervals, while the register
write helper takes a byte-aligned offset.  This fixes
DEV_TAB_HARDWARE_ERROR events and associated peripheral I/O failures
on an Epyc-based system with 8-segment device tables.
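
A hedged sketch of the offset fix (register and helper names are
illustrative): segment base registers sit at 8-byte strides, and the
write helper expects a byte offset, so the segment index must be
scaled.

    #define DEVTAB_SEG_OFF(seg)  (AMDIOMMU_DEVTAB_BASE + (seg) * 8)

    amdiommu_write8(sc, DEVTAB_SEG_OFF(seg), base | size_code);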

Reviewed by:		kib
Differential Revision:	https://reviews.freebsd.org/D47752
2024-11-25 21:40:45 -06:00
Cy Schubert
501c4801ed truss: Fix grammar in error messages
MFC after:	3 days
2024-11-25 14:52:35 -08:00
David Gilbert
3a212cc66a sbin/{ffsinfo,mount,newfs}: reference ffs(4) in man pages
PR:		282867
MFC:		stable/14
Approved by:	mhorne (via IRC)
2024-11-25 22:57:20 +01:00
Mark Johnston
73465bb47b savecore: Add a livedump regression test
MFC after:	2 weeks
Differential Revision:	https://reviews.freebsd.org/D47715
2024-11-25 21:12:51 +00:00
Mark Johnston
37cef00192 livedump: Silence KASAN and KMSAN when livedumping
The livedumper triggers reports from both of these sanitizers since it
necessarily accesses uninitialized or freed memory.  Add a flag to
silence reports from both sanitizers.

Reviewed by:	mhorne, khng
MFC after:	2 weeks
Differential Revision:	https://reviews.freebsd.org/D47714
2024-11-25 21:12:51 +00:00
Mitchell Horne
e9fa399180 riscv: T-HEAD early locore workaround
The T-HEAD custom PTE bits are defined in such a way that the
default/normal memory type is a non-zero value. This _unthoughtful_ choice
means that, unlike the Svpbmt and non-Svpbmt cases, this field cannot be
left bare in our bootstrap PTEs, or the hardware will fail to proceed
far enough in boot (cache strangeness). On the other hand, we cannot
unconditionally apply the PTE_THEAD_MA_NONE attributes, as this is not
compatible with spec-compliant RISC-V hardware, and will result in a
fatal exception.

Therefore, in order to handle this errata, we are forced to perform a
check of the CPU type at the first moment possible. Do so, and fix up
the PTEs with the correct memory attribute bits in the T-HEAD case.
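
A hedged sketch of the locore-time fixup (the CPU-check name is
illustrative; PTE_THEAD_MA_NONE is the attribute named above):

    /* Standard hardware faults on these reserved PTE bits, so gate
     * the fixup on the vendor check performed early in locore. */
    if (cpu_is_thead)
        pte |= PTE_THEAD_MA_NONE;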

Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D47458
2024-11-25 17:08:04 -04:00
Mitchell Horne
c7fa232e9b locore.S: stash boot arguments in saved registers
Switch the boot argument registers to the unused s3 and s4. This ensures
the values will not be clobbered by SBI or function calls; they are
consumed late in the assembly routine.

Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D47457
2024-11-25 17:08:04 -04:00
Mitchell Horne
ccbe9a9f73 riscv: T-HEAD PBMT support
T-HEAD CPUs provide a spec-violating implementation of page-based memory
types, using PTE bits [63:59]. Add basic support for this "errata",
referred to in some places as an "extension".

Note that this change is not enough on its own, but a workaround is
needed for the bootstrap (locore) page tables as well.

Reviewed by:	jhb
Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D45472
2024-11-25 17:08:04 -04:00
Mitchell Horne
dfe57951f0 riscv: add custom T-HEAD dcache ops
This is the first major quirk we need to support in order to run on
current T-HEAD/XuanTie CPUs, e.g. the C906 or C910, found in several
existing RISC-V SBCs. With these custom dcache routines installed,
busdma can reliably communicate with devices which are not coherent
w.r.t. the CPU's data caches.

This patch introduces the first quirk/errata handling functions to
identcpu.c, and thus is forced to make some decisions about how this
code is structured. It will be amended with the changes that follow in
the series, yet I feel the final result is (unavoidably) somewhat
clumsy. I expect the CPU identification code will continue to evolve as
more CPUs and their quirks are eventually supported.

Discussed with:	jrtc27
Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D47455
2024-11-25 17:08:04 -04:00
Mitchell Horne
4ab2a84e09 riscv: dcache flush hooks
Cache management operations were, for a long time, unspecified by the
RISC-V ISA, and thus these functions have been no-ops. To cope, hardware
with non-coherent I/O has implemented custom cache flush mechanisms,
either in the form of custom instructions or special device registers.
Additionally, the RISC-V CMO extension is ratified and these official
instructions will start to show up in hardware eventually. Therefore, a
method is needed to select the dcache management routines at runtime.

Add a simple set of function hooks, as well as a routine to install them
and specify the minimum dcache line size. The first consumer will be the
non-standard cache management instructions for T-HEAD CPUs.

The unused I-cache variables and macros are removed.
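
A hedged sketch of what such a hook set could look like (names are
illustrative, not necessarily the committed interface):

    struct riscv_cache_ops {
        void (*dcache_wbinv_range)(vm_offset_t, vm_size_t);
        void (*dcache_inv_range)(vm_offset_t, vm_size_t);
        void (*dcache_wb_range)(vm_offset_t, vm_size_t);
    };

    /* Install the ops and the minimum dcache line size at runtime. */
    void riscv_cache_install_hooks(struct riscv_cache_ops *ops, u_int line);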

Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D47454
2024-11-25 17:08:03 -04:00
John Baldwin
bf06074106 ccr(4): Belatedly bump .Dd for prior commit
Pointy hat to:	jhb
2024-11-25 15:16:09 -05:00
John Baldwin
370ad2d367 ccr(4): Mention geli(4) and ktls(4) as other consumers
Cross reference crypto(7) and crypto(9) as well.

Sponsored by:	Chelsio Communications
2024-11-25 14:59:36 -05:00
Konstantin Belousov
bde575b273 kern___realpathat(): honor uio_seg argument
Reviewed by:	markj
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
Differential revision:	https://reviews.freebsd.org/D47739
2024-11-25 21:36:41 +02:00
Konstantin Belousov
67218bcea8 kern___realpathat(): do not copyout past end of string
Reported and tested by:	pho
Reviewed by:	markj
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
Differential revision:	https://reviews.freebsd.org/D47739
2024-11-25 21:36:41 +02:00
Konstantin Belousov
31784ee1e3 kern___realpathat(): style
Reviewed by:	markj
Sponsored by:	The FreeBSD Foundation
MFC after:	3 days
Differential revision:	https://reviews.freebsd.org/D47739
2024-11-25 21:36:41 +02:00
John Baldwin
af1ef35a00 RELNOTES: Document that ktls is now enabled by default 2024-11-25 13:54:25 -05:00
John Baldwin
b2f7c53430 ktls: Enable by default
Reviewed by:	gallatin, markj
Sponsored by:	Chelsio Communications
Differential Revision:	https://reviews.freebsd.org/D47735
2024-11-25 13:50:27 -05:00
Gleb Smirnoff
67f9307907 mlx5e tls: use non-sleeping malloc flag as it was intended
Reviewed by:	gallatin
Fixes:		81b38bce07
2024-11-25 10:46:13 -08:00
Cy Schubert
8585680682 Revert "rc.d/var_run: Fix typo in comment"
svcj is not a typo.

Noted by:	jlduran
MFC after:	3 days

This reverts commit bef05a7537.
2024-11-25 10:43:54 -08:00
Wolfram Schneider
fb4cdd5160 fhreadlink.2: fix old typo in the manpage
PR: 282967
Approved by: kib
2024-11-25 18:38:20 +00:00
Kevin Bowling
c1e304c60c setsockopt.2: Clarify SO_SPLICE action
Reviewed by:	gallatin, markj
MFC after:	3 days
Sponsored by:	Netflix
Differential Revision:	https://reviews.freebsd.org/D47720
Co-authored-by:	Mark Johnston <markj@FreeBSD.org>
2024-11-25 11:36:00 -07:00
Andrey V. Elsukov
c94d6389e4 ipsec: fix IPv6 over IPv4 tunneling.
Properly initialize the setdf variable in ipsec_encap().
It is used in the AF_INET6 case, when an IPv6 datagram is going to be
encapsulated into an IPv4 datagram.

PR:		282535
Fixes:		4046178557
MFC after:	1 week
2024-11-25 20:42:00 +03:00
Cy Schubert
4d58cf6ff9 rc.d/var_run: Add missing $(dirname)
We intend to create the containing directory here. Fix this typo.

PR:		282939
MFC after:	3 days
2024-11-25 09:16:59 -08:00
Cy Schubert
bef05a7537 rc.d/var_run: Fix typo in comment 2024-11-25 09:10:13 -08:00
Stefan Eßer
aa308b49e3 mtree/BSD.tests.dist: remove entry for OpenBSD dc command
The OpenBSD derived dc program has been removed in commit 8ea6c11540,
but the creation of a directory for tests of this program had not been
disabled in that commit.

Reported by:	kevans
2024-11-25 16:57:17 +01:00
John Baldwin
73b42eff25 rc.conf: Update commented examples for lo0 to use CIDR
In particular, a bare IP address no longer works.

Reviewed by:	bz, imp, emaste
Differential Revision:	https://reviews.freebsd.org/D47716
2024-11-25 10:14:33 -05:00
Konstantin Belousov
6ec4ff7088 amd64: switch pmap_map_io_transient() to use pmap_kenter_attr()
instead of constructing the transient pte itself.  This pre-sets the PG_A
and PG_M bits, avoiding atomic pte updates on access and modification.  It
also sets the nx bit, since the mapping is not supposed to be used for
executing.

Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
Differential revision:	https://reviews.freebsd.org/D47717
2024-11-25 14:16:50 +02:00
Konstantin Belousov
2d6923790b amd64 pmap: assert and explain why pmap_qremove() is safe WRT supermappings
Based on alc@ comments from https://reviews.freebsd.org/D47678.

Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
Differential revision:	https://reviews.freebsd.org/D47717
2024-11-25 14:16:50 +02:00
Wolfram Schneider
aebac84982 manpage: cross link fhreadlink(2) <-> readlink(2)
2024-11-25 09:02:34 +00:00
Doug Moore
ff4c19bb54 vm_page: pass page to iter_free
Pass the to-be-freed page to vm_page_iter_free as a parameter, rather
than computing it from the iterator parameter, to improve performance.

Sort declarations of page_iter functions in vm_page.h.

Reviewed by:	alc
Differential Revision:	https://reviews.freebsd.org/D47727
2024-11-25 02:03:34 -06:00
Stefan Eßer
8ea6c11540 usr.bin/bc: remove OpenBSD derived bc and dc commands
In 2020, an improved implementation of the bc and dc commands
developed by Gavin D. Howard has been imported into FreeBSD.
It has replaced the OpenBSD-derived versions of these commands
in all currently supported FreeBSD releases.

The OpenBSD versions could still be built using the WITHOUT_GH_BC
option. There have been no reports of problems or unexpected
deviations from the OpenBSD version for some time, therefore
keeping the OpenBSD version is no longer required in FreeBSD.

This commit removes the option to build the OpenBSD version and
corresponding source files from -CURRENT. No MFC is planned, all
currently released FreeBSD versions should retain the build option.

The WITHOUT_GH_BC option is no longer accepted and will cause
make buildworld to fail.

Reviewed by:	des, emaste
Approved by:	des
Relnotes:	yes
Differential Revision:	https://reviews.freebsd.org/D46876
2024-11-24 22:38:23 +01:00
Kevin Bowling
e80419da6c igc: disable hw.igc.sbp
Similar to 548d8a131d in e1000, disable this by default.

MFC after:	3 days
Sponsored by:	BBOX.io
2024-11-24 14:08:54 -07:00
Rick Macklem
0347ddf41f nfs_commonsubs.c: Make all upper case user domain work
Without this patch, an all upper case user domain name
(as specified by nfsuserd(8)) would not work.
I believe this was done so that Kerberos realms were
not confused with user domains.

Now, RFC8881 specifies that the user domain name is a
DNS name.  As such, all upper case names should work.

This patch fixes this case so that it works.  The custom
comparison function is no longer needed.

PR:	282620
Tested by:	jmmv
MFC after:	2 weeks
2024-11-24 12:47:56 -08:00
Martin Matuska
718519f4ef zfs: merge openzfs/zfs@d0a91b9f8
Notable upstream pull request merges:
 #16643 -multiple Change rangelock handling in FreeBSD's zfs_getpages()
 #16697 46c4f2ce0 dsl_dataset: put IO-inducing frees on the pool deadlist
 #16740 -multiple BRT: Rework structures and locks to be per-vdev
 #16743 a60ed3822 L2ARC: Move different stats updates earlier
 #16758 8dc452d90 Fix some nits in zfs_getpages()
 #16759 534688948 Remove hash_elements_max accounting from DBUF and ARC
 #16766 9a81484e3 ZAP: Reduce leaf array and free chunks fragmentation
 #16773 457f8b76e BRT: More optimizations after per-vdev splitting
 #16782 0ca82c568 L2ARC: Stop rebuild before setting spa_final_txg
 #16785 d76d79fd2 zio: Avoid sleeping in the I/O path
 #16791 ae1d11882 BRT: Clear bv_entcount_dirty on destroy
 #16796 b3b0ce64d FreeBSD: Lock vnode in zfs_ioctl()
 #16797 d0a91b9f8 FreeBSD: Reduce copy_file_range() source lock to shared

Obtained from:	OpenZFS
OpenZFS commit:	d0a91b9f88
2024-11-24 10:04:51 +01:00
Kristof Provost
a46c121db4 netpfil tests: make dummynet tests more robust
These tests try to verify that packet prioritisation works as expected. This is
inherently a statistical process, and is difficult to measure automatically.
Run the tests more times and accept more failures.

Sponsored by:	Rubicon Communications, LLC ("Netgate")
2024-11-24 09:34:09 +01:00
Kevin Bowling
7390daf87c e1000: Style txrx
Fix up indentation and reflow long lines.

MFC after:	3 days
Sponsored by:	BBOX.io
2024-11-24 00:39:43 -07:00
Kevin Bowling
c7fb7b5d9f igc: Style pass igc_txrx
Fix up indentation and reflow long lines.

MFC after:	3 days
Sponsored by:	BBOX.io
2024-11-24 00:27:12 -07:00
Kevin Bowling
c58d34dd67 ixgbe: Style pass on FreeBSD part of driver
Fix up some indentation and reflow long lines

MFC after:	3 days
Sponsored by:	BBOX.io
2024-11-24 00:21:37 -07:00
Kevin Bowling
9efc7325f1 igc: Reflow long lines
MFC after:	3 days
Sponsored by:	BBOX.io
2024-11-23 22:58:13 -07:00
Kevin Bowling
6f14883066 e1000: Style pass on if_em
Fix up some indentation and reflow long lines

MFC after:	3 days
Sponsored by:	BBOX.io
2024-11-23 22:45:52 -07:00
Kevin Bowling
d1bb1a5011 igc: Normalize indentation a bit
MFC after:	3 days
Sponsored by:	BBOX.io
2024-11-23 20:17:27 -07:00
Kevin Bowling
bceec3d80a e1000: Try auto-negotiation for fixed 100 or 10 configuration
This is a retread of https://reviews.freebsd.org/D34449 which I think
will fix the issue for the remote side not supporting autoneg.  We now
attempt an autoneg, and if that fails fall back to the current code
that forces the link speed/duplex.

The original intent of this patch is to inform the remote switch of
duplex settings when we (the client) are specifying a fixed 10 or 100
speed.  Otherwise it may get the duplex setting wrong.

The tricky case is when the remote (switch) side is fixing its
speed AND duplex while disabling autoneg and we (client) need to do
the same, which still seems to be common enough at some ISPs.

Original commit message follows:
Currently if an e1000 interface is set to a fixed media configuration,
for gigabit, it will participate in auto-negotiation as required by
IEEE 802.3-2018 Clause 37. However, if set to fixed media configuration
for 100 or 10, it does NOT participate in auto-negotiation.

By my reading of Clauses 28 and 37, while auto-negotiation is optional
for 100 and 10, it is not prohibited and is, in fact, "highly
recommended".

This patch enables auto-negotiation for fixed 100 and 10 media
configuration, in a similar manner to that already performed for 1000.
I.e., the patch enables advertising of just the manually configured
settings with the goal of allowing the remote end to match the manually
configured settings if it has them available.
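
A hedged sketch of the fallback logic (the forcing helper is
illustrative): advertise only the fixed setting, and force speed/duplex
only if negotiation fails.

    hw->mac.autoneg = 1;
    hw->phy.autoneg_advertised = ADVERTISE_100_FULL; /* just the fixed media */
    if (e1000_setup_link(hw) != E1000_SUCCESS)
        em_force_speed_duplex(sc);  /* previous behavior as fallback */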

To be clear, this patch does NOT allow an em(4) interface that has been
manually configured with specific media settings to respond to
auto-negotiation by then configuring different parameters to those that
were manually configured. The intent of this patch is to fully comply
with the requirements of Clause 37, but for 100 and 10.

The need for this has arisen on an em(4) link where the other end is
under a different administrative control and is set to full
auto-negotiation. Due to the cable length GigE is not working well. It
is desired to set the em(4) end to "media 100baseTX mediatype
full-duplex" which does work when both ends are configured that way.
Currently, because em(4) does not participate in autoneg for this
setting, the remote defaults to half-duplex - i.e., there's a duplex
mismatch and things don't work. With this patch, em(4) would inform the
remote that it has only 100baseTX full, the remote would match that and
it will work.

Tested by:	Natalino Picone <natalino.picone@nozominetworks.com>
Tested by:	Franco Fichtner <franco@opnsense.org>
Tested by:	J.R. Oldroyd <fbsd@opal.com> (previous version)
Sponsored by:	Nozomi Networks
Sponsored by:	BBOX.io
Differential Revision:	https://reviews.freebsd.org/D47336
2024-11-23 17:18:37 -07:00
Alexander Motin
d0a91b9f88
FreeBSD: Reduce copy_file_range() source lock to shared
Linux locks copy_file_range() source as shared.  FreeBSD was doing
it also, but then was changed to exclusive, partially because KPI
of that time was doing so, and partially seems out of caution.
Considering zfs_clone_range() uses range locks on both source and
destination, neither should require exclusive vnode locks. But one
step at a time, just sync it with Linux for now.
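
A hedged sketch of the resulting locking (illustrative, not the exact
diff): the source vnode now only needs a shared lock.

    if ((error = vn_lock(invp, LK_SHARED)) != 0)
        return (error);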

Reviewed-by: Alan Somers <asomers@gmail.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #16789
Closes #16797
2024-11-23 14:29:03 -08:00
Alexander Motin
b3b0ce64d5
FreeBSD: Lock vnode in zfs_ioctl()
Previously the vnode was not locked there, unlike on Linux.  It required
locking it in vn_flush_cached_data(), which recursed on the lock
if called from zfs_clone_range() with the vnode already locked.

Reviewed-by: Alan Somers <asomers@gmail.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Closes #16789
Closes #16796
2024-11-23 14:26:52 -08:00
Thomas Eberhardt
b4ede68c21 find: Correct ls(1) equivalent command for -ls primary
After commit 3bfbb521fe, -g stopped being a no-op.  The -g hasn't
been required for equivalent output since 4.4BSD.

PR:		282901
Fixes:		3bfbb521 ls: Improve POSIX compatibility for -g and -n.
2024-11-23 11:45:27 -05:00
John Baldwin
b4c700fa7c new-bus: Fix some shortcomings in disabling devices via hints
A device can be disabled via a hint after it is probed (but before it
is attached).  The initial version of this marked the device disabled,
but left the device "alive" meaning that dev->driver and dev->desc
were untouched and still pointed into the driver that probed the
device.  If that driver lives in a kernel module that is later
unloaded, device_detach() called from devclass_delete_driver() doesn't
do anything (the device's state is DS_ALIVE).  In particular, it
doesn't call device_set_driver(dev, NULL) to disassociate the device
from the driver that is being unloaded.

There are several places where these stale pointers can be tripped
over.  After kldunload, invoking the sysctl to fetch device info can
dereference dev->desc and dev->driver causing panics.  Even without
kldunload, a system suspend request will call the device_suspend and
device_resume DEVMETHODs of the driver in question even though the
device is not attached which can cause some excitement.

To clean this up, more fully detach a device that is disabled by a
hint by clearing the driver and setting the state to DS_NOTPRESENT.
However, to keep the device name+unit combination reserved, leave the
device attached to its devclass.
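
A hedged sketch of the disable path described above (the state setter
is illustrative; device_t internals differ):

    /* Disassociate the probing driver so a later kldunload is safe. */
    device_set_driver(dev, NULL);
    device_set_state(dev, DS_NOTPRESENT);  /* illustrative setter */
    /* The devclass is kept, reserving the name+unit combination. */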

This requires a change to 'devctl enable' handling to deal with this
updated state.  It now checks for a non-NULL devclass to determine if
a disabled device is in this state and if so it clears the hint.
However, it also now clears the devclass before attaching the device.
This gives all drivers an opportunity to attach to the now-enabled
device.

Reported by:	adrian
Discussed with:	imp
Reviewed by:	imp
MFC after:	1 week
Differential Revision:	https://reviews.freebsd.org/D47691
2024-11-23 11:39:02 -05:00
Ka Ho Ng
9ad8116b54 kerneldump: fix incorrect SETSIZE to BIT_COPY_STORE_REL when livedumping
Also replace malloc/free with BITSET_ALLOC/BITSET_FREE macros.
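
A hedged sketch of the allocation macros (the set size and malloc type
are illustrative):

    BITSET_DEFINE(pgbits, MAXSIZE);
    struct pgbits *bits;

    bits = BITSET_ALLOC(MAXSIZE, M_TEMP, M_WAITOK);
    ...
    BITSET_FREE(bits, M_TEMP);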

Sponsored by:	Juniper Networks, Inc.
MFC after:	1 week
Reviewed by:	markj
Differential Revision:	https://reviews.freebsd.org/D47708
2024-11-23 16:35:07 +00:00
Poul-Henning Kamp
7749de2440 Add new kern.vt.slow_down tunable.
On a laptop with no other console devices than the screen, things
scroll off the screen faster than eye or camera can capture them.

This tunable slows the console down and makes it update synchronously,
so console output continues even when timers or interrupts do not run.

Differential Revision:  https://reviews.freebsd.org/D47710
2024-11-23 15:01:09 +00:00
Andrey V. Elsukov
e012d79c9c ipfw: fix order of memcpy arguments.
This fixes the `ipfw table N lookup addr` command for MAC tables.

MFC after:	1 week
2024-11-23 15:52:43 +03:00
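A hedged sketch of the command path this fixes (table number and MAC
address are made up for illustration):

    ipfw table 1 create type mac
    ipfw table 1 add 11:22:33:44:55:66
    ipfw table 1 lookup 11:22:33:44:55:66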
Ariel Ehrenberg
253a1fa16b mlx5: Fix handling of port_module_event
Remove the array of port module status and instead save module status
and module number.

At boot, for each PCI function the driver gets an event from firmware
about module status.  The event contains the module number and module
status, which the driver stores.  When a user (ifconfig) asks for
module information, for each PCI function the driver first queries
firmware for the module number of the current PCI function, then
compares it to the module number it stored before; if they match and
the module status is "plugged and enabled", the driver queries
firmware for the EEPROM information of that module and returns it to
the caller.

In principle firmware could have determined the required module number
of the current PCI function itself, but it is not implemented this
way.  The current design of PRM/FW is that MCIA register handling is
only aware of modules, not of the PCI function->module connections.
FW is designed to take the module number written to MCIA and
write/read the content to/from the associated module's EEPROM.

So, based on the current FW design, we must supply the module number
so firmware can find the corresponding I2C interface of the module to
write/read.

Sponsored by:	NVidia networking
MFC after:	1 week
2024-11-23 12:59:26 +02:00
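For context, this query path is what userland exercises when asking
for transceiver details, e.g. (interface name illustrative):

    ifconfig -v mce0    # prints module/EEPROM details for the plugged transceiver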
Konstantin Belousov
0d38b0bc8f mlx5en: fix the sign of mlx5e_tls_st_init() error, convert from Linux to BSD
Sponsored by:	NVidia networking
MFC after:	1 week
2024-11-23 12:09:50 +02:00
Konstantin Belousov
7fbc896e28 vm_page.c: remove transiently defined vm_page_free_toq_impl() prototype
Sponsored by:	The FreeBSD Foundation
2024-11-23 12:02:00 +02:00
Konstantin Belousov
64bf5a431c mlx5_en: style function prototype
Sponsored by:	NVidia networking
MFC after:	2 weeks
2024-11-23 12:01:50 +02:00
Andrew Gallatin
81b38bce07 mlx5e tls: Ensure all allocated tags have a hw context associated
Ensure all allocated tags have a hardware context associated.
The hardware context allocation is moved into the zone import
routine, as suggested by kib.  This is safe because these zone
allocations are always done in a sleepable context.

I have removed the now-pointless num_resources tracking, and added
sysctls/tunables to control the UMA zone limits for these TLS tags,
as well as a tunable to let the driver pre-allocate tags at boot.

MFC after:	2 weeks
2024-11-23 12:01:50 +02:00
Mark Johnston
fdeb273d49 dtrace: Add some more annotations for KMSAN
- Don't allow FBT and kinst to instrument the KMSAN runtime.
- When fetching data from the traced thread's stack, mark it as
  initialized.  It may well be uninitialized, but as dtrace permits
  arbitrary inspection of kernel memory, it isn't very useful to raise
  KMSAN reports.
- Mark data copied in from userspace as initialized, as we do for
  copyin() etc. using interceptors.

MFC after:	2 weeks
2024-11-23 02:36:08 +00:00
Mark Johnston
1905ce3a6b dtrace: Remove an unused typedef
No functional change intended.

MFC after:	1 week
2024-11-23 02:36:08 +00:00
Kristof Provost
e0bf7bc3b2 pf: reduce indentation level in pf_dummynet_route()
Reverse the first if() in pf_dummynet_route() to avoid an unneeded level of
indentation.

No functional change.

Sponsored by:	Rubicon Communications, LLC ("Netgate")
2024-11-23 00:32:04 +01:00
Ed Maste
6643965998 getentropy: restore unistd.h include
It is needed for SSP support.

Reported by: netchild, Shawn Webb
Fixes: 62dab3d016 ("getentropy: Remove fallback code")
Sponsored by: The FreeBSD Foundation
2024-11-22 13:08:41 -05:00
Pavel Snajdr
38c0324c0f
Linux: Fix zfs_prune panics
by protecting against sb->s_shrink eviction on umount with newer kernels.

deactivate_locked_super() calls shrinker_free() and only then the
sops->kill_sb callback, resulting in a use-after-free on umount when
zpl_prune_sb tries to reach the shrinker functions of a dataset that
is being unmounted.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Adam Moss <c@yotes.com>
Signed-off-by: Pavel Snajdr <snajpa@snajpa.net>
Closes #16770
2024-11-21 15:30:43 -08:00
Alexander Motin
ae1d11882d
BRT: Clear bv_entcount_dirty on destroy
This fixes an assertion failure in brt_sync_table() on debug builds
when the last cloned block on a vdev is freed and bv_meta_dirty is
cleared while bv_entcount_dirty is not.  It should not matter in
production.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #16791
2024-11-21 08:20:22 -08:00
Brian Behlendorf
225e76cd7d
Linux 6.12 compat: META
Update the META file to reflect compatibility with the 6.12 kernel.

Reviewed-by: Umer Saleem <usaleem@ixsystems.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #16793
2024-11-21 07:56:30 -08:00
Alexander Motin
9a81484e35
ZAP: Reduce leaf array and free chunks fragmentation
The previous implementation of zap_leaf_array_free() put chunks on
the free list in reverse order.  Also, zap_leaf_transfer_entry() and
zap_entry_remove() were freeing name and value arrays in reverse
order.  Together this created a mess in the free list, making
subsequent allocations much more fragmented than necessary.

This patch re-implements zap_leaf_array_free() to keep the existing
chunk order, and implements a non-destructive zap_leaf_array_copy()
to be used in zap_leaf_transfer_entry(), allowing properly ordered
freeing of name and value arrays there and in zap_entry_remove().

With this change, a test of some writes and deletes shows the share
of non-contiguous chunks in the DDT dropping from 61% and 47% to 0%
and 17% for arrays and frees respectively.  Explicit sorting could do
even better, especially for ZAPs with variable-size arrays, but it
would also cost much more, while this should be very cheap.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #16766
2024-11-20 13:37:52 -08:00
tleydxdy
d02257c280
fix: block incompatible kernel from being installed
The current "Requires" lines only ensure the old kernel is
available on the system but it does not prevent fedora from
updating to an incompatible and breaking user's system.

Set Conflicts to block incompatible kernels from being installed.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: tleydxdy <shironeko.github@tesaguri.club>
Closes #16139
2024-11-20 13:19:07 -08:00
Mark Johnston
d76d79fd27
zio: Avoid sleeping in the I/O path
zio_delay_interrupt(), apparently used for fault injection, is executed
in the I/O pipeline.  It can cause the calling thread to go to sleep,
which is not allowed on FreeBSD.  This happens only for small delays,
though, and there's no apparent reason to avoid deferring to a taskqueue
in that case, as it already does otherwise.

Simply defer to a taskqueue unconditionally.  This fixes an
occasional panic I see when running the ZTS on FreeBSD.  Also remove
an unhelpful comment referencing the non-existent timeout_generic().

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by:  Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Mark Johnston <markj@FreeBSD.org>
Closes #16785
2024-11-20 08:23:08 -08:00
Alexander Motin
457f8b76e7
BRT: More optimizations after per-vdev splitting
 - With both pending and current AVL-trees being per-vdev and having
effectively identical comparison functions (the pending tree also
compared birth time, but I don't believe it is possible for them to
be different for the same offset within one transaction group), it
makes no sense to move entries from one to another.  Instead inline
a dramatically simplified brt_entry_addref() into brt_pending_apply().
It no longer requires bv_lock, since there is nothing concurrent
with it at that time.  And it does not need to search the tree for
the previous entries, since it is the same tree: we already have the
entry and we know it is unique.
 - Put brt_vdev_lookup() and brt_vdev_addref() into different tree
traversals to avoid false positives in the first due to the second's
entcount modifications.  This saves a dramatic amount of time when a
file is cloned for the first time, by not looking for non-existent
ZAP entries.
 - Remove the avl_is_empty(bv_tree) check from brt_maybe_exists().  I
don't think it is needed, since by that time all added entries are
already accounted for in bv_entcount.  The extra check must have been
producing too many false positives for no reason.  We also don't need
bv_lock there, since the bv_entcount pointer must be stable at this
point, and we don't care about false-positive races here, while false
negatives should be impossible, since all brt_vdev_addref() calls
have already completed by this point.  This dramatically reduces lock
contention on massive deletes of cloned blocks.  The only remaining
contention is between multiple parallel free threads calling
brt_entry_decref().
 - Do not update the ZAP if the net change for a block over the TXG
was 0.  In combination with the above, this makes moving a file
between datasets as cheap an operation as originally intended if it
fits into one TXG.
 - Do not allocate vdevs on pool creation or import if the pool did
not have active block cloning.  This saves a bit of memory in a few
cases.
 - While here, add proper error handling in brt_load() on pool
import instead of assertions.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #16773
2024-11-20 06:20:14 -08:00
Brian Behlendorf
49a377aa30
ZTS: Fix zpool_status_008_pos false positive
Increase the injected delay to 1000ms and the ZIO_SLOW_IO_MS threshold
to 750ms to avoid false positives due to unrelated slow IOs which may
occur in the CI environment.  Additionally, clear the fault injection as
soon as it is no longer required for the test case.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #16769
2024-11-20 06:13:32 -08:00
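A hedged sketch of the kind of injection this test performs (device
and pool names are placeholders, not taken from the commit):

    zinject -d da0 -D 1000:1 tank    # delay each I/O on da0 by 1000ms, one lane
    # ... run the I/O expected to be flagged as slow ...
    zinject -c all                   # clear all injections as soon as done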
Alexander Motin
0ca82c5680
L2ARC: Stop rebuild before setting spa_final_txg
Without doing that there is a race window on export in which a
history log write by a completed rebuild dirties a transaction beyond
the final TXG, triggering an assertion.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: George Amanakis <gamanakis@gmail.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Closes #16714
Closes #16782
2024-11-20 06:11:51 -08:00
Alexander Motin
534688948c
Remove hash_elements_max accounting from DBUF and ARC
Those values require global atomics to get the current hash_elements
values in a few of the hottest code paths, while in all these years I
never cared about them.  If somebody wants the maximum, it should be
easy to obtain by periodic sampling, since neither ARC header nor
DBUF counts change so fast that it would be difficult to catch.

For now I've left the hash_elements_max kstat for ARC, since it was
used/reported by arc_summary and removing it would break older
versions, but now it just reports the current value.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #16759
2024-11-19 07:00:16 -08:00
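The periodic sampling suggested above could be as simple as this on
FreeBSD (kstat name assumed from the usual arcstats layout):

    while sleep 60; do
            sysctl -n kstat.zfs.misc.arcstats.hash_elements
    done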
Rob Norris
ffe2112795
Move "no name changes" from compression to checksum table
Compression names actually aren't used in dedup table names, but
checksum names are.

Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: George Melikov <mail@gmelikov.ru>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Closes #16776
2024-11-19 06:55:27 -08:00
Steve Mokris
e08e832b10
Expand zpool-remove.8 manpage with example results
Also fix comment cross-referencing to zpool.8.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by:  Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Steve Mokris <smokris@softpixel.com>
Closes #16777
2024-11-19 06:52:04 -08:00
Alexander Motin
0d6306be8c
Fix few __VA_ARGS typos in assertions
It should be __VA_ARGS__, not __VA_ARGS.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Rob Norris <robn@despairlabs.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Closes #16780
2024-11-19 06:48:06 -08:00
Ameer Hamza
ff3df1211c
zed: prevent automatic replacement of offline vdevs
When an OFFLINE device is physically removed, a spare is automatically
activated. However, this behavior differs on FreeBSD, where we do not
transition from the OFFLINE state to REMOVED.
Our support team has encountered cases where customers experienced
unexpected behavior during drive replacements, with multiple spares
activating for the same VDEV due to a single disk replacement. This
patch ensures that a drive in an OFFLINE state remains in that state,
preventing it from transitioning to REMOVED and being automatically
replaced by a spare.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Ameer Hamza <ahamza@ixsystems.com>
Closes #16751
2024-11-15 15:08:16 -08:00
Alexander Motin
fd6e8c1d2a BRT: Rework structures and locks to be per-vdev
While the block cloning operation was made per-vdev from the
beginning, before this change most of its data were protected by two
pool-wide locks.  This created lots of lock contention in many
workloads.

This change makes most of the block cloning data structures per-vdev,
which allows them to be locked separately.  The only pool-wide lock
now is spa_brt_lock, protecting the array of per-vdev pointers and in
most cases taken as reader.  This also splits the per-vdev locks into
three different ones: bv_pending_lock protects the AVL-tree of
pending operations in open context, bv_mos_entries_lock protects the
BRT ZAP object while it is being prefetched, and bv_lock protects the
rest of the per-vdev context during the TXG commit process.  There
should be no functional difference aside from some optimizations.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Pawel Jakub Dawidek <pjd@FreeBSD.org>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Closes #16740
2024-11-15 15:04:11 -08:00
Alexander Motin
309ce6303f ZAP: Add by_dnode variants to lookup/prefetch_uint64
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Pawel Jakub Dawidek <pjd@FreeBSD.org>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Closes #16740
2024-11-15 15:04:02 -08:00
Alexander Motin
1ee251bdde BRT: Don't call brt_pending_remove() on holes/embedded
We already do exactly the same checks around every brt_pending_add() call.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Pawel Jakub Dawidek <pjd@FreeBSD.org>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Closes #16740
2024-11-15 15:03:57 -08:00
Alexander Motin
483087b06f ZTS: Avoid embedded blocks in bclone/bclone_prop_sync
If we write less than 113 bytes with compression enabled we get an
embedded block, which then fails the check for the number of cloned
blocks in bclone_test.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Pawel Jakub Dawidek <pjd@FreeBSD.org>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Closes #16740
2024-11-15 15:02:44 -08:00
Rob Norris
648873f020
AUTHORS: refresh with recent new contributors
Welcome to the party 🎉

Sponsored-by: https://despairlabs.com/sponsor/
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: George Melikov <mail@gmelikov.ru>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #16762
2024-11-15 15:00:06 -08:00
Alexander Ziaee
cc0ed47523
style.mdoc: fix list width alignment instructions
MFC after:	3 days
2024-11-15 13:57:49 -05:00
José Luis Salvador Rufo
de2e9a5c6b
tests: fix uClibc for getversion.c
This patch fixes compilation with uClibc by applying the same fallback
as commit e12d76176d4e5454db62eb48b58ecd4970838a76 to the `getversion.c`
file, which was previously overlooked.
 
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: José Luis Salvador Rufo <salvador.joseluis@gmail.com>
Closes #16735
Closes #16741
2024-11-14 16:56:12 -08:00
Ameer Hamza
3462f3bd50
zvol_os.c: Increase optimal IO size
Since zvol read and write can process up to (DMU_MAX_ACCESS / 2) bytes
in a single operation, the current optimal I/O size is too low. SCST
directly reports this value as the optimal transfer length for the
target SCSI device. Increasing it from the previous volblocksize results
in a performance improvement for large-block parallel I/O workloads.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Ameer Hamza <ahamza@ixsystems.com>
Closes #16750
2024-11-14 14:14:33 -08:00
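On Linux the advertised value can be inspected through sysfs; a
sketch (the zvol device name is illustrative):

    cat /sys/block/zd0/queue/optimal_io_size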
Mark Johnston
8dc452d907
Fix some nits in zfs_getpages()
- If we don't want dmu_read_pages() to perform extra readahead/behind,
  pass a pointer to 0 instead of a null pointer, as dmu_read_pages()
  expects rahead and rbehind to be non-null.
- Avoid unneeded iterations in a loop.

Sponsored-by: Klara, Inc.
Reported-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Mark Johnston <markj@FreeBSD.org>
Closes #16758
2024-11-14 14:12:57 -08:00
Rob Norris
46c4f2ce0b
dsl_dataset: put IO-inducing frees on the pool deadlist
dsl_free() calls zio_free() to free the block. For most blocks, this
simply calls metaslab_free() without doing any IO or putting anything on
the IO pipeline.

Some blocks however require additional IO to free. This at least
includes gang, dedup and cloned blocks. For those, zio_free() will issue
a ZIO_TYPE_FREE IO and return.

If a huge number of blocks are being freed all at once, it's possible
for dsl_dataset_block_kill() to be called millions of times on a
single transaction (e.g. a 2T object of 128K blocks is 16M blocks).
If those are all IO-inducing frees, that then becomes 16M FREE IOs
placed on the pipeline. At time of writing, a zio_t is 1280 bytes, so
for just one 2T object that requires a 20G allocation of resident
memory from the zio_cache. If that can't be satisfied by the kernel,
an out-of-memory condition is raised.

This would be better handled by improving the cases that the
dmu_tx_assign() throttle will handle, or by reducing the overheads
required by the IO pipeline, or with a better central facility for
freeing blocks.

For now, we simply check for the cases that would cause zio_free() to
create a FREE IO, and instead put the block on the pool's freelist. This
is the same place that blocks from destroyed datasets go, and the async
destroy machinery will automatically see them and trickle them out as
normal.

Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Closes #6783
Closes #16708
Closes #16722 
Closes #16697
2024-11-13 07:38:42 -08:00
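The arithmetic above is easy to verify (plain bc, nothing
ZFS-specific):

    echo '2 * 1024^4 / (128 * 1024)' | bc    # 16777216 blocks (16M)
    echo '16777216 * 1280 / 1024^3' | bc     # 20 (GiB of zio_t structures)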
Alexander Motin
a60ed3822b
L2ARC: Move different stats updates earlier
..., before we make the header or the log block visible to others.
It should fix an assertion about allocated space going negative if
the header is freed once the lock is dropped while the write is still
in progress.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Rob Norris <robn@despairlabs.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #16040
Closes #16743
2024-11-13 07:31:50 -08:00
Mark Johnston
178682506f Grab the rangelock unconditionally in zfs_getpages()
As a deadlock avoidance measure, zfs_getpages() would only try to
acquire a rangelock, falling back to a single-page read if this was not
possible.  However, this is incompatible with direct I/O.

Instead, release the busy lock before trying to acquire the rangelock in
blocking mode.  This means that it's possible for the page to be
replaced, so we have to re-lookup.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Mark Johnston <markj@FreeBSD.org>
Closes #16643
2024-11-13 07:25:39 -08:00
Mark Johnston
25eb538778 Fix a potential page leak in mappedread_sf()
mappedread_sf() may allocate pages; if it fails to populate a page
and can't free it, it needs to ensure that it's placed into a page
queue, otherwise it can't be reclaimed until the vnode is destroyed.

I think this is quite unlikely to happen in practice; it was noticed
by code inspection.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Mark Johnston <markj@FreeBSD.org>
Closes #16643
2024-11-13 07:24:14 -08:00
158 changed files with 3942 additions and 9032 deletions

View File

@ -51,6 +51,13 @@
# xargs -n1 | sort | uniq -d;
# done
# 20241124: library and tests of OpenBSD dc
OLD_FILES+=usr/share/misc/bc.library
OLD_FILES+=usr/tests/usr.bin/dc/Kyuafile
OLD_FILES+=usr/tests/usr.bin/dc/bcode
OLD_FILES+=usr/tests/usr.bin/dc/inout
OLD_DIRS+=usr/tests/usr.bin/dc
# 20241119: rewrite mv tests
OLD_FILES+=usr/tests/bin/mv/legacy_test

View File

@ -10,6 +10,11 @@ newline. Entries should be separated by a newline.
Changes to this file should not be MFCed.
b2f7c53430c3:
Kernel TLS is now enabled by default in kernels including KTLS
support. KTLS is included in GENERIC kernels for aarch64,
amd64, powerpc64, and powerpc64le.
f57efe95cc25:
New mididump(1) utility which dumps MIDI 1.0 events in real time.

View File

@ -27,6 +27,11 @@ NOTE TO PEOPLE WHO THINK THAT FreeBSD 15.x IS SLOW:
world, or to merely disable the most expensive debugging functionality
at runtime, run "ln -s 'abort:false,junk:false' /etc/malloc.conf".)
20241124:
The OpenBSD derived bc and dc implementations and the WITHOUT_GH_BC
option that allowed building them instead of the advanced version
imported more than 4 years ago have been removed.
20241025:
The support for the rc_fast_and_loose variable has been removed from
rc.subr(8). Users setting rc_fast_and_loose on their systems are

View File

@ -505,6 +505,8 @@
..
route
..
savecore
..
sysctl
..
..
@ -1089,8 +1091,6 @@
..
cut
..
dc
..
diff
..
diff3

View File

@ -31,6 +31,7 @@
#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <ssp/ssp.h>
#include "libc_private.h"

View File

@ -60,7 +60,7 @@ code in the global variable
.Va errno .
.Sh ERRORS
The
.Fn readlink
.Fn fhreadlink
system call
will fail if:
.Bl -tag -width Er
@ -87,7 +87,8 @@ is no longer valid
.El
.Sh SEE ALSO
.Xr fhlink 2 ,
.Xr fhstat 2
.Xr fhstat 2 ,
.Xr readlink 2
.Sh HISTORY
The
.Fn fhreadlink

View File

@ -25,7 +25,7 @@
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.Dd July 8, 2024
.Dd November 25, 2024
.Dt GETSOCKOPT 2
.Os
.Sh NAME
@ -568,9 +568,14 @@ struct so_splice {
.Pp
Data received on
.Fa s
will automatically be transmitted from the socket specified in
will automatically be transmitted via the socket specified in
.Fa sp_fd
without any intervention by userspace.
That is, the data will be transmitted via
.Fa sp_fd
as if userspace had called
.Xr send 2
directly.
Splicing is a one-way operation; a given pair of sockets may be
spliced in one or both directions.
Currently only connected

View File

@ -138,6 +138,7 @@ is neither
nor a file descriptor associated with a directory.
.El
.Sh SEE ALSO
.Xr fhreadlink 2 ,
.Xr lstat 2 ,
.Xr stat 2 ,
.Xr symlink 2 ,

View File

@ -263,8 +263,8 @@ icmp_log_redirect="NO" # Set to YES to log ICMP REDIRECT packets
network_interfaces="auto" # List of network interfaces (or "auto").
cloned_interfaces="" # List of cloned network interfaces to create.
#cloned_interfaces="gif0 gif1 gif2 gif3" # Pre-cloning GENERIC config.
#ifconfig_lo0="inet 127.0.0.1" # default loopback device configuration.
#ifconfig_lo0_alias0="inet 127.0.0.254 netmask 0xffffffff" # Sample alias entry.
#ifconfig_lo0="inet 127.0.0.1/8" # default loopback device configuration.
#ifconfig_lo0_alias0="inet 127.0.0.254/32" # Sample alias entry.
#ifconfig_em0_ipv6="inet6 2001:db8:1::1 prefixlen 64" # Sample IPv6 addr entry
#ifconfig_em0_alias0="inet6 2001:db8:2::1 prefixlen 64" # Sample IPv6 alias
#ifconfig_em0_name="net0" # Change interface name from em0 to net0.

View File

@ -27,7 +27,7 @@ _var_run_load() {
_var_run_save() {
if [ ! -d $(dirname ${var_run_mtree}) ]; then
mkdir -p ${var_run_mtree}
mkdir -p $(dirname ${var_run_mtree})
fi
mtree -dcbj -p /var/run > ${var_run_mtree}
}

View File

@ -36,7 +36,7 @@
.\"
.\" $TSHeader: src/sbin/ffsinfo/ffsinfo.8,v 1.3 2000/12/12 19:30:55 tomsoft Exp $
.\"
.Dd September 8, 2000
.Dd November 19, 2024
.Dt FFSINFO 8
.Os
.Sh NAME
@ -120,6 +120,7 @@ to
.Pa /var/tmp/ffsinfo
with all available information.
.Sh SEE ALSO
.Xr ffs 4 ,
.Xr dumpfs 8 ,
.Xr fsck 8 ,
.Xr gpart 8 ,

View File

@ -25,7 +25,7 @@
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.Dd January 24, 2024
.Dd November 19, 2024
.Dt MOUNT 8
.Os
.Sh NAME
@ -568,6 +568,7 @@ support for a particular file system might be provided either on a static
.Xr cd9660 4 ,
.Xr devfs 4 ,
.Xr ext2fs 4 ,
.Xr ffs 4 ,
.Xr mac 4 ,
.Xr procfs 4 ,
.Xr tarfs 4 ,

View File

@ -25,7 +25,7 @@
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.Dd May 18, 2024
.Dd November 19, 2024
.Dt NEWFS 8
.Os
.Sh NAME
@ -350,6 +350,7 @@ than the historical defaults
This large fragment size may lead to much wasted space
on file systems that contain many small files.
.Sh SEE ALSO
.Xr ffs 4 ,
.Xr geom 4 ,
.Xr disktab 5 ,
.Xr fs 5 ,

View File

@ -18,4 +18,7 @@ CFLAGS+= -DWITH_CASPER
LIBADD+= casper cap_fileargs cap_syslog
.endif
HAS_TESTS=
SUBDIR.${MK_TESTS}+= tests
.include <bsd.prog.mk>

View File

@ -0,0 +1,3 @@
ATF_TESTS_SH= livedump_test
.include <bsd.test.mk>

View File

@ -0,0 +1,54 @@
#
# SPDX-License-Identifier: BSD-2-Clause
#
# Copyright (c) 2024 Mark Johnston <markj@FreeBSD.org>
#
atf_test_case livedump_kldstat
livedump_kldstat_head()
{
atf_set "descr" "Test livedump integrity"
atf_set "require.progs" kgdb
atf_set "require.user" root
}
livedump_kldstat_body()
{
atf_check savecore -L .
kernel=$(sysctl -n kern.bootfile)
if ! [ -f /usr/lib/debug/${kernel}.debug ]; then
atf_skip "No debug symbols for the running kernel"
fi
# Implement kldstat using gdb script.
cat >./kldstat.gdb <<'__EOF__'
printf "Id Refs Address Size Name\n"
set $_lf = linker_files.tqh_first
while ($_lf)
printf "%2d %4d %p %8x %s\n", $_lf->id, $_lf->refs, $_lf->address, $_lf->size, $_lf->filename
set $_lf = $_lf->link.tqe_next
end
__EOF__
# Ignore stderr since kgdb prints some warnings about inaccessible
# source files.
#
# Use a script to source the main gdb script, otherwise kgdb prints
# a bunch of line noise that is annoying to filter out.
echo "source ./kldstat.gdb" > ./script.gdb
atf_check -o save:out -e ignore \
kgdb -q ${kernel} ./livecore.0 < ./script.gdb
# Get rid of gunk printed by kgdb.
sed -i '' -n -e 's/^(kgdb) //' -e '/^Id Refs /,$p' out
# The output of kgdb should match the output of kldstat.
atf_check -o save:kldstat kldstat
atf_check diff kldstat out
}
atf_init_test_cases()
{
atf_add_test_case livedump_kldstat
}
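Once installed, the test would be driven through kyua as usual; a
sketch (the path under /usr/tests follows from the savecore Makefile
above, but is an assumption):

    cd /usr/tests/sbin/savecore && kyua test livedump_test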

View File

@ -22,7 +22,7 @@
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.Dd June 3, 2019
.Dd November 25, 2024
.Dt CCR 4
.Os
.Sh NAME
@ -52,7 +52,10 @@ The driver accelerates AES-CBC, AES-CCM, AES-CTR, AES-GCM, AES-XTS,
SHA1, SHA2-224, SHA2-256, SHA2-384, SHA2-512,
SHA1-HMAC, SHA2-224-HMAC, SHA2-256-HMAC, SHA2-384-HMAC, and SHA2-512-HMAC
operations for
.Xr crypto 4
.Xr crypto 9
consumers such as
.Xr ktls 4 ,
.Xr geli 4 ,
and
.Xr ipsec 4 .
The driver also supports chaining one of AES-CBC, AES-CTR, or AES-XTS with
@ -97,7 +100,11 @@ email all the specific information related to the issue to
.Sh SEE ALSO
.Xr crypto 4 ,
.Xr cxgbe 4 ,
.Xr ipsec 4
.Xr geli 4 ,
.Xr ipsec 4 ,
.Xr ktls 4 ,
.Xr crypto 7 ,
.Xr crypto 9
.Sh HISTORY
The
.Nm

View File

@ -50,6 +50,7 @@ In
.Cd kern.vt.color.<colornum>.rgb="<colorspec>"
.Cd kern.vt.fb.default_mode="<X>x<Y>"
.Cd kern.vt.fb.modes.<connector>="<X>x<Y>"
.Cd kern.vt.slow_down="<delay>"
.Cd screen.font="<X>x<Y>"
.Pp
In
@ -266,6 +267,16 @@ It will contain a list of connectors and their associated tunables.
This is currently only supported by the
.Cm vt_fb
backend when it is paired with a KMS video driver.
.It Va kern.vt.slow_down
When debugging the kernel on modern laptops, the screen is often
the only available console, and relevant information will scroll
out of view before it can be captured by eye or camera.
.Pp
Setting
.Va kern.vt.slow_down
to a non-zero number will make console output synchronous (i.e.,
not dependent on timers and interrupts) and slow it down in proportion
to the number.
.It Va screen.font
Set this value to the base name of the desired font file located in
.Pa /boot/fonts .

View File

@ -712,12 +712,6 @@ and
.Xr ftpd 8 .
.It Va WITHOUT_GAMES
Do not build games.
.It Va WITHOUT_GH_BC
Install the traditional FreeBSD
.Xr bc 1
and
.Xr dc 1
programs instead of the enhanced versions.
.It Va WITHOUT_GNU_DIFF
Do not build GNU
.Xr diff3 1 ;

View File

@ -124,7 +124,8 @@ The
.Fl width
argument to the
.Sy \&.Bl
macro should match the length of the longest item in the list, e.g.:
macro should match the length of the rendered first longest item in the list,
e.g.:
.Bd -literal -offset indent
\&.Bl -tag -width "-a address"
\&.It Fl a Ar address

View File

@ -101,7 +101,6 @@ __DEFAULT_YES_OPTIONS = \
FREEBSD_UPDATE \
FTP \
GAMES \
GH_BC \
GNU_DIFF \
GOOGLETEST \
GPIO \

View File

@ -4082,7 +4082,19 @@ pmap_qremove(vm_offset_t sva, int count)
va = sva;
while (count-- > 0) {
/*
* pmap_enter() calls within the kernel virtual
* address space happen on virtual addresses from
* subarenas that import superpage-sized and -aligned
* address ranges. So, the virtual address that we
* allocate to use with pmap_qenter() can't be close
* enough to one of those pmap_enter() calls for it to
* be caught up in a promotion.
*/
KASSERT(va >= VM_MIN_KERNEL_ADDRESS, ("usermode va %lx", va));
KASSERT((*vtopde(va) & X86_PG_PS) == 0,
("pmap_qremove on promoted va %#lx", va));
pmap_kremove(va);
va += PAGE_SIZE;
}
@ -10506,8 +10518,7 @@ pmap_map_io_transient(vm_page_t page[], vm_offset_t vaddr[], int count,
{
vm_paddr_t paddr;
bool needs_mapping;
pt_entry_t *pte;
int cache_bits, error __unused, i;
int error __unused, i;
/*
* Allocate any KVA space that we need, this is done in a separate
@ -10552,11 +10563,8 @@ pmap_map_io_transient(vm_page_t page[], vm_offset_t vaddr[], int count,
*/
pmap_qenter(vaddr[i], &page[i], 1);
} else {
pte = vtopte(vaddr[i]);
cache_bits = pmap_cache_bits(kernel_pmap,
page[i]->md.pat_mode, false);
pte_store(pte, paddr | X86_PG_RW | X86_PG_V |
cache_bits);
pmap_kenter_attr(vaddr[i], paddr,
page[i]->md.pat_mode);
pmap_invlpg(kernel_pmap, vaddr[i]);
}
}

View File

@ -101,14 +101,19 @@ get_vfpcontext(struct thread *td, mcontext_vfp_t *vfp)
P_SHOULDSTOP(td->td_proc));
pcb = td->td_pcb;
if ((pcb->pcb_fpflags & PCB_FP_STARTED) != 0 && td == curthread) {
if (td == curthread) {
critical_enter();
vfp_store(&pcb->pcb_vfpstate, false);
critical_exit();
}
KASSERT(pcb->pcb_vfpsaved == &pcb->pcb_vfpstate,
("Called get_vfpcontext while the kernel is using the VFP"));
memcpy(vfp, &pcb->pcb_vfpstate, sizeof(*vfp));
memset(vfp, 0, sizeof(*vfp));
memcpy(vfp->mcv_reg, pcb->pcb_vfpstate.reg,
sizeof(vfp->mcv_reg));
vfp->mcv_fpscr = pcb->pcb_vfpstate.fpscr;
}
/*
@ -127,7 +132,10 @@ set_vfpcontext(struct thread *td, mcontext_vfp_t *vfp)
}
KASSERT(pcb->pcb_vfpsaved == &pcb->pcb_vfpstate,
("Called set_vfpcontext while the kernel is using the VFP"));
memcpy(&pcb->pcb_vfpstate, vfp, sizeof(*vfp));
memcpy(pcb->pcb_vfpstate.reg, vfp->mcv_reg,
sizeof(pcb->pcb_vfpstate.reg));
pcb->pcb_vfpstate.fpscr = vfp->mcv_fpscr;
}
#endif
@ -163,8 +171,6 @@ get_mcontext(struct thread *td, mcontext_t *mcp, int clear_ret)
{
struct trapframe *tf = td->td_frame;
__greg_t *gr = mcp->__gregs;
mcontext_vfp_t mcontext_vfp;
int rv;
if (clear_ret & GET_MC_CLEAR_RET) {
gr[_REG_R0] = 0;
@ -189,19 +195,9 @@ get_mcontext(struct thread *td, mcontext_t *mcp, int clear_ret)
gr[_REG_LR] = tf->tf_usr_lr;
gr[_REG_PC] = tf->tf_pc;
#ifdef VFP
if (mcp->mc_vfp_size != sizeof(mcontext_vfp_t))
return (EINVAL);
get_vfpcontext(td, &mcontext_vfp);
#else
bzero(&mcontext_vfp, sizeof(mcontext_vfp));
#endif
if (mcp->mc_vfp_ptr != NULL) {
rv = copyout(&mcontext_vfp, mcp->mc_vfp_ptr, sizeof(mcontext_vfp));
if (rv != 0)
return (rv);
}
mcp->mc_vfp_size = 0;
mcp->mc_vfp_ptr = NULL;
memset(&mcp->mc_spare, 0, sizeof(mcp->mc_spare));
return (0);
}
@ -315,6 +311,16 @@ sendsig(sig_t catcher, ksiginfo_t *ksi, sigset_t *mask)
/* Populate the siginfo frame. */
bzero(&frame, sizeof(frame));
get_mcontext(td, &frame.sf_uc.uc_mcontext, 0);
#ifdef VFP
get_vfpcontext(td, &frame.sf_vfp);
frame.sf_uc.uc_mcontext.mc_vfp_size = sizeof(fp->sf_vfp);
frame.sf_uc.uc_mcontext.mc_vfp_ptr = &fp->sf_vfp;
#else
frame.sf_uc.uc_mcontext.mc_vfp_size = 0;
frame.sf_uc.uc_mcontext.mc_vfp_ptr = NULL;
#endif
frame.sf_si = ksi->ksi_info;
frame.sf_uc.uc_sigmask = *mask;
frame.sf_uc.uc_stack = td->td_sigstk;

View File

@ -51,11 +51,7 @@ extern "C" {
#include <sys/file.h>
#ifndef illumos
#ifdef __sparcv9
typedef uint32_t pc_t;
#else
typedef uintptr_t pc_t;
#endif
typedef u_long greg_t;
#endif

View File

@ -29,6 +29,7 @@
#include <sys/systm.h>
#include <sys/dtrace_impl.h>
#include <sys/kernel.h>
#include <sys/msan.h>
#include <sys/stack.h>
#include <sys/pcpu.h>
@ -73,6 +74,8 @@ dtrace_getpcstack(pc_t *pcstack, int pcstack_limit, int aframes,
frame = (struct amd64_frame *)rbp;
td = curthread;
while (depth < pcstack_limit) {
kmsan_mark(frame, sizeof(*frame), KMSAN_STATE_INITED);
if (!kstack_contains(curthread, (vm_offset_t)frame,
sizeof(*frame)))
break;
@ -99,6 +102,7 @@ dtrace_getpcstack(pc_t *pcstack, int pcstack_limit, int aframes,
for (; depth < pcstack_limit; depth++) {
pcstack[depth] = 0;
}
kmsan_check(pcstack, pcstack_limit * sizeof(*pcstack), "dtrace");
}
static int
@ -399,8 +403,10 @@ dtrace_getarg(int arg, int aframes)
goto load;
}
for (i = 1; i <= aframes; i++)
for (i = 1; i <= aframes; i++) {
kmsan_mark(fp, sizeof(*fp), KMSAN_STATE_INITED);
fp = fp->f_frame;
}
/*
* We know that we did not come through a trap to get into
@ -430,6 +436,8 @@ load:
val = stack[arg];
DTRACE_CPUFLAG_CLEAR(CPU_DTRACE_NOFAULT);
kmsan_mark(&val, sizeof(val), KMSAN_STATE_INITED);
return (val);
}
@ -444,10 +452,13 @@ dtrace_getstackdepth(int aframes)
rbp = dtrace_getfp();
frame = (struct amd64_frame *)rbp;
depth++;
for(;;) {
for (;;) {
kmsan_mark(frame, sizeof(*frame), KMSAN_STATE_INITED);
if (!kstack_contains(curthread, (vm_offset_t)frame,
sizeof(*frame)))
break;
depth++;
if (frame->f_frame <= frame)
break;
@ -574,76 +585,100 @@ void
dtrace_copyin(uintptr_t uaddr, uintptr_t kaddr, size_t size,
volatile uint16_t *flags)
{
if (dtrace_copycheck(uaddr, kaddr, size))
if (dtrace_copycheck(uaddr, kaddr, size)) {
dtrace_copy(uaddr, kaddr, size);
kmsan_mark((void *)kaddr, size, KMSAN_STATE_INITED);
}
}
void
dtrace_copyout(uintptr_t kaddr, uintptr_t uaddr, size_t size,
volatile uint16_t *flags)
{
if (dtrace_copycheck(uaddr, kaddr, size))
if (dtrace_copycheck(uaddr, kaddr, size)) {
kmsan_check((void *)kaddr, size, "dtrace_copyout");
dtrace_copy(kaddr, uaddr, size);
}
}
void
dtrace_copyinstr(uintptr_t uaddr, uintptr_t kaddr, size_t size,
volatile uint16_t *flags)
{
if (dtrace_copycheck(uaddr, kaddr, size))
if (dtrace_copycheck(uaddr, kaddr, size)) {
dtrace_copystr(uaddr, kaddr, size, flags);
kmsan_mark((void *)kaddr, size, KMSAN_STATE_INITED);
}
}
void
dtrace_copyoutstr(uintptr_t kaddr, uintptr_t uaddr, size_t size,
volatile uint16_t *flags)
{
if (dtrace_copycheck(uaddr, kaddr, size))
if (dtrace_copycheck(uaddr, kaddr, size)) {
kmsan_check((void *)kaddr, size, "dtrace_copyoutstr");
dtrace_copystr(kaddr, uaddr, size, flags);
}
}
uint8_t
dtrace_fuword8(void *uaddr)
{
uint8_t val;
if ((uintptr_t)uaddr > VM_MAXUSER_ADDRESS) {
DTRACE_CPUFLAG_SET(CPU_DTRACE_BADADDR);
cpu_core[curcpu].cpuc_dtrace_illval = (uintptr_t)uaddr;
return (0);
}
return (dtrace_fuword8_nocheck(uaddr));
val = dtrace_fuword8_nocheck(uaddr);
kmsan_mark(&val, sizeof(val), KMSAN_STATE_INITED);
return (val);
}
uint16_t
dtrace_fuword16(void *uaddr)
{
uint16_t val;
if ((uintptr_t)uaddr > VM_MAXUSER_ADDRESS) {
DTRACE_CPUFLAG_SET(CPU_DTRACE_BADADDR);
cpu_core[curcpu].cpuc_dtrace_illval = (uintptr_t)uaddr;
return (0);
}
return (dtrace_fuword16_nocheck(uaddr));
val = dtrace_fuword16_nocheck(uaddr);
kmsan_mark(&val, sizeof(val), KMSAN_STATE_INITED);
return (val);
}
uint32_t
dtrace_fuword32(void *uaddr)
{
uint32_t val;
if ((uintptr_t)uaddr > VM_MAXUSER_ADDRESS) {
DTRACE_CPUFLAG_SET(CPU_DTRACE_BADADDR);
cpu_core[curcpu].cpuc_dtrace_illval = (uintptr_t)uaddr;
return (0);
}
return (dtrace_fuword32_nocheck(uaddr));
val = dtrace_fuword32_nocheck(uaddr);
kmsan_mark(&val, sizeof(val), KMSAN_STATE_INITED);
return (val);
}
uint64_t
dtrace_fuword64(void *uaddr)
{
uint64_t val;
if ((uintptr_t)uaddr > VM_MAXUSER_ADDRESS) {
DTRACE_CPUFLAG_SET(CPU_DTRACE_BADADDR);
cpu_core[curcpu].cpuc_dtrace_illval = (uintptr_t)uaddr;
return (0);
}
return (dtrace_fuword64_nocheck(uaddr));
val = dtrace_fuword64_nocheck(uaddr);
kmsan_mark(&val, sizeof(val), KMSAN_STATE_INITED);
return (val);
}
/*

View File

@ -136,6 +136,13 @@ fbt_excluded(const char *name)
strcmp(name, "owner_sx") == 0)
return (1);
/*
* The KMSAN runtime can't be instrumented safely.
*/
if (strncmp(name, "__msan", 6) == 0 ||
strncmp(name, "kmsan_", 6) == 0)
return (1);
/*
* Stack unwinders may be called from probe context on some
* platforms.

View File

@ -132,6 +132,13 @@ kinst_excluded(const char *name)
strcmp(name, "owner_sx") == 0)
return (true);
/*
* The KMSAN runtime can't be instrumented safely.
*/
if (strncmp(name, "__msan", 6) == 0 ||
strncmp(name, "kmsan_", 6) == 0)
return (true);
/*
* When DTrace is built into the kernel we need to exclude the kinst
* functions from instrumentation.

View File

@ -36,6 +36,7 @@ riscv/riscv/bus_machdep.c standard
riscv/riscv/bus_space_asm.S standard
riscv/riscv/busdma_bounce.c standard
riscv/riscv/busdma_machdep.c standard
riscv/riscv/cache.c standard
riscv/riscv/clock.c standard
riscv/riscv/copyinout.S standard
riscv/riscv/cpufunc_asm.S standard
@ -83,5 +84,7 @@ riscv/vmm/vmm_riscv.c optional vmm
riscv/vmm/vmm_sbi.c optional vmm
riscv/vmm/vmm_switch.S optional vmm
riscv/thead/thead.c standard
# Zstd
contrib/zstd/lib/freebsd/zstd_kfreebsd.c optional zstdio compile-with ${ZSTD_C}

View File

@ -70,6 +70,7 @@ Rob Norris <robn@despairlabs.com>
Rob Norris <rob.norris@klarasystems.com>
Sam Lunt <samuel.j.lunt@gmail.com>
Sanjeev Bagewadi <sanjeev.bagewadi@gmail.com>
Sebastian Wuerl <s.wuerl@mailbox.org>
Stoiko Ivanov <github@nomore.at>
Tamas TEVESZ <ice@extreme.hu>
WHR <msl0000023508@gmail.com>
@ -78,6 +79,7 @@ Youzhong Yang <youzhong@gmail.com>
# Signed-off-by: overriding Author:
Ryan <errornointernet@envs.net> <error.nointernet@gmail.com>
Sietse <sietse@wizdom.nu> <uglymotha@wizdom.nu>
Qiuhao Chen <chenqiuhao1997@gmail.com> <haohao0924@126.com>
Yuxin Wang <yuxinwang9999@gmail.com> <Bi11gates9999@gmail.com>
Zhenlei Huang <zlei@FreeBSD.org> <zlei.huang@gmail.com>

View File

@ -423,6 +423,7 @@ CONTRIBUTORS:
Mathieu Velten <matmaul@gmail.com>
Matt Fiddaman <github@m.fiddaman.uk>
Matthew Ahrens <matt@delphix.com>
Matthew Heller <matthew.f.heller@gmail.com>
Matthew Thode <mthode@mthode.org>
Matthias Blankertz <matthias@blankertz.org>
Matt Johnston <matt@fugro-fsi.com.au>
@ -562,6 +563,7 @@ CONTRIBUTORS:
Scot W. Stevenson <scot.stevenson@gmail.com>
Sean Eric Fagan <sef@ixsystems.com>
Sebastian Gottschall <s.gottschall@dd-wrt.com>
Sebastian Wuerl <s.wuerl@mailbox.org>
Sebastien Roy <seb@delphix.com>
Sen Haerens <sen@senhaerens.be>
Serapheim Dimitropoulos <serapheim@delphix.com>
@ -574,6 +576,7 @@ CONTRIBUTORS:
Shawn Bayern <sbayern@law.fsu.edu>
Shengqi Chen <harry-chen@outlook.com>
Shen Yan <shenyanxxxy@qq.com>
Sietse <sietse@wizdom.nu>
Simon Guest <simon.guest@tesujimath.org>
Simon Klinkert <simon.klinkert@gmail.com>
Sowrabha Gopal <sowrabha.gopal@delphix.com>
@ -629,6 +632,7 @@ CONTRIBUTORS:
Trevor Bautista <trevrb@trevrb.net>
Trey Dockendorf <treydock@gmail.com>
Troels Nørgaard <tnn@tradeshift.com>
tstabrawa <tstabrawa@users.noreply.github.com>
Tulsi Jain <tulsi.jain@delphix.com>
Turbo Fredriksson <turbo@bayour.com>
Tyler J. Stachecki <stachecki.tyler@gmail.com>

View File

@ -6,5 +6,5 @@ Release: 1
Release-Tags: relext
License: CDDL
Author: OpenZFS
Linux-Maximum: 6.11
Linux-Maximum: 6.12
Linux-Minimum: 4.18

View File

@ -662,10 +662,7 @@ def section_arc(kstats_dict):
print()
print('ARC hash breakdown:')
prt_i1('Elements max:', f_hits(arc_stats['hash_elements_max']))
prt_i2('Elements current:',
f_perc(arc_stats['hash_elements'], arc_stats['hash_elements_max']),
f_hits(arc_stats['hash_elements']))
prt_i1('Elements:', f_hits(arc_stats['hash_elements']))
prt_i1('Collisions:', f_hits(arc_stats['hash_collisions']))
prt_i1('Chain max:', f_hits(arc_stats['hash_chain_max']))

View File

@ -2119,9 +2119,6 @@ dump_brt(spa_t *spa)
return;
}
brt_t *brt = spa->spa_brt;
VERIFY(brt);
char count[32], used[32], saved[32];
zdb_nicebytes(brt_get_used(spa), used, sizeof (used));
zdb_nicebytes(brt_get_saved(spa), saved, sizeof (saved));
@ -2132,11 +2129,8 @@ dump_brt(spa_t *spa)
if (dump_opt['T'] < 2)
return;
for (uint64_t vdevid = 0; vdevid < brt->brt_nvdevs; vdevid++) {
brt_vdev_t *brtvd = &brt->brt_vdevs[vdevid];
if (brtvd == NULL)
continue;
for (uint64_t vdevid = 0; vdevid < spa->spa_brt_nvdevs; vdevid++) {
brt_vdev_t *brtvd = spa->spa_brt_vdevs[vdevid];
if (!brtvd->bv_initiated) {
printf("BRT: vdev %" PRIu64 ": empty\n", vdevid);
continue;
@ -2160,20 +2154,21 @@ dump_brt(spa_t *spa)
if (!do_histo)
printf("\n%-16s %-10s\n", "DVA", "REFCNT");
for (uint64_t vdevid = 0; vdevid < brt->brt_nvdevs; vdevid++) {
brt_vdev_t *brtvd = &brt->brt_vdevs[vdevid];
if (brtvd == NULL || !brtvd->bv_initiated)
for (uint64_t vdevid = 0; vdevid < spa->spa_brt_nvdevs; vdevid++) {
brt_vdev_t *brtvd = spa->spa_brt_vdevs[vdevid];
if (!brtvd->bv_initiated)
continue;
uint64_t counts[64] = {};
zap_cursor_t zc;
zap_attribute_t *za = zap_attribute_alloc();
for (zap_cursor_init(&zc, brt->brt_mos, brtvd->bv_mos_entries);
for (zap_cursor_init(&zc, spa->spa_meta_objset,
brtvd->bv_mos_entries);
zap_cursor_retrieve(&zc, za) == 0;
zap_cursor_advance(&zc)) {
uint64_t refcnt;
VERIFY0(zap_lookup_uint64(brt->brt_mos,
VERIFY0(zap_lookup_uint64(spa->spa_meta_objset,
brtvd->bv_mos_entries,
(const uint64_t *)za->za_name, 1,
za->za_integer_length, za->za_num_integers,
@ -8227,14 +8222,11 @@ dump_mos_leaks(spa_t *spa)
}
}
if (spa->spa_brt != NULL) {
brt_t *brt = spa->spa_brt;
for (uint64_t vdevid = 0; vdevid < brt->brt_nvdevs; vdevid++) {
brt_vdev_t *brtvd = &brt->brt_vdevs[vdevid];
if (brtvd != NULL && brtvd->bv_initiated) {
mos_obj_refd(brtvd->bv_mos_brtvdev);
mos_obj_refd(brtvd->bv_mos_entries);
}
for (uint64_t vdevid = 0; vdevid < spa->spa_brt_nvdevs; vdevid++) {
brt_vdev_t *brtvd = spa->spa_brt_vdevs[vdevid];
if (brtvd->bv_initiated) {
mos_obj_refd(brtvd->bv_mos_brtvdev);
mos_obj_refd(brtvd->bv_mos_entries);
}
}

View File

@ -445,8 +445,8 @@ zfs_retire_recv(fmd_hdl_t *hdl, fmd_event_t *ep, nvlist_t *nvl,
* it's a loopback event from spa_async_remove(). Just
* ignore it.
*/
if (vs->vs_state == VDEV_STATE_REMOVED &&
state == VDEV_STATE_REMOVED)
if ((vs->vs_state == VDEV_STATE_REMOVED && state ==
VDEV_STATE_REMOVED) || vs->vs_state == VDEV_STATE_OFFLINE)
return;
/* Remove the vdev since device is unplugged */

View File

@ -201,7 +201,7 @@ spl_assert(const char *buf, const char *file, const char *func, int line)
"failed (%lld " #OP " %lld) " STR "\n", \
(long long)(_verify3_left), \
(long long)(_verify3_right), \
__VA_ARGS); \
__VA_ARGS__); \
} while (0)
#define VERIFY3UF(LEFT, OP, RIGHT, STR, ...) do { \
@ -213,7 +213,7 @@ spl_assert(const char *buf, const char *file, const char *func, int line)
"failed (%llu " #OP " %llu) " STR "\n", \
(unsigned long long)(_verify3_left), \
(unsigned long long)(_verify3_right), \
__VA_ARGS); \
__VA_ARGS__); \
} while (0)
#define VERIFY3PF(LEFT, OP, RIGHT, STR, ...) do { \

View File

@ -98,11 +98,9 @@ vn_flush_cached_data(vnode_t *vp, boolean_t sync)
{
if (vm_object_mightbedirty(vp->v_object)) {
int flags = sync ? OBJPC_SYNC : 0;
vn_lock(vp, LK_SHARED | LK_RETRY);
zfs_vmobject_wlock(vp->v_object);
vm_object_page_clean(vp->v_object, 0, 0, flags);
zfs_vmobject_wunlock(vp->v_object);
VOP_UNLOCK(vp);
}
}
#endif

View File

@ -205,7 +205,7 @@ spl_assert(const char *buf, const char *file, const char *func, int line)
"failed (%lld " #OP " %lld) " STR "\n", \
(long long)(_verify3_left), \
(long long)(_verify3_right), \
__VA_ARGS); \
__VA_ARGS__); \
} while (0)
#define VERIFY3UF(LEFT, OP, RIGHT, STR, ...) do { \
@ -217,7 +217,7 @@ spl_assert(const char *buf, const char *file, const char *func, int line)
"failed (%llu " #OP " %llu) " STR "\n", \
(unsigned long long)(_verify3_left), \
(unsigned long long)(_verify3_right), \
__VA_ARGS); \
__VA_ARGS__); \
} while (0)
#define VERIFY3PF(LEFT, OP, RIGHT, STR, ...) do { \

View File

@ -347,6 +347,7 @@ void l2arc_fini(void);
void l2arc_start(void);
void l2arc_stop(void);
void l2arc_spa_rebuild_start(spa_t *spa);
void l2arc_spa_rebuild_stop(spa_t *spa);
#ifndef _KERNEL
extern boolean_t arc_watch;

View File

@ -942,6 +942,7 @@ typedef struct arc_sums {
wmsum_t arcstat_evict_l2_eligible_mru;
wmsum_t arcstat_evict_l2_ineligible;
wmsum_t arcstat_evict_l2_skip;
wmsum_t arcstat_hash_elements;
wmsum_t arcstat_hash_collisions;
wmsum_t arcstat_hash_chains;
aggsum_t arcstat_size;

View File

@ -86,28 +86,38 @@ typedef struct brt_vdev_phys {
uint64_t bvp_savedspace;
} brt_vdev_phys_t;
typedef struct brt_vdev {
struct brt_vdev {
/*
* Pending changes from open contexts.
*/
kmutex_t bv_pending_lock;
avl_tree_t bv_pending_tree[TXG_SIZE];
/*
* Protects bv_mos_*.
*/
krwlock_t bv_mos_entries_lock ____cacheline_aligned;
/*
* Protects all the fields starting from bv_initiated.
*/
krwlock_t bv_lock ____cacheline_aligned;
/*
* VDEV id.
*/
uint64_t bv_vdevid;
/*
* Is the structure initiated?
* (bv_entcount and bv_bitmap are allocated?)
*/
boolean_t bv_initiated;
uint64_t bv_vdevid ____cacheline_aligned;
/*
* Object number in the MOS for the entcount array and brt_vdev_phys.
*/
uint64_t bv_mos_brtvdev;
/*
* Object number in the MOS for the entries table.
* Object number in the MOS and dnode for the entries table.
*/
uint64_t bv_mos_entries;
dnode_t *bv_mos_entries_dnode;
/*
* Entries to sync.
* Is the structure initiated?
* (bv_entcount and bv_bitmap are allocated?)
*/
avl_tree_t bv_tree;
boolean_t bv_initiated;
/*
* Does the bv_entcount[] array need byte swapping?
*/
@ -120,6 +130,26 @@ typedef struct brt_vdev {
* This is the array with BRT entry count per BRT_RANGESIZE.
*/
uint16_t *bv_entcount;
/*
* bv_entcount[] potentially can be a bit too big to synchronize it all
* when we just changed a few entcounts. The fields below allow us to
* track updates to bv_entcount[] array since the last sync.
* A single bit in the bv_bitmap represents as many entcounts as can
* fit into a single BRT_BLOCKSIZE.
* For example we have 65536 entcounts in the bv_entcount array
* (so the whole array is 128kB). We updated bv_entcount[2] and
* bv_entcount[5]. In that case only first bit in the bv_bitmap will
* be set and we will write only first BRT_BLOCKSIZE out of 128kB.
*/
ulong_t *bv_bitmap;
/*
* bv_entcount[] needs updating on disk.
*/
boolean_t bv_entcount_dirty;
/*
* brt_vdev_phys needs updating on disk.
*/
boolean_t bv_meta_dirty;
/*
* Sum of all bv_entcount[]s.
*/
@ -133,65 +163,27 @@ typedef struct brt_vdev {
*/
uint64_t bv_savedspace;
/*
* brt_vdev_phys needs updating on disk.
* Entries to sync.
*/
boolean_t bv_meta_dirty;
/*
* bv_entcount[] needs updating on disk.
*/
boolean_t bv_entcount_dirty;
/*
* bv_entcount[] potentially can be a bit too big to synchronize it all
* when we just changed a few entcounts. The fields below allow us to
* track updates to bv_entcount[] array since the last sync.
* A single bit in the bv_bitmap represents as many entcounts as can
* fit into a single BRT_BLOCKSIZE.
* For example we have 65536 entcounts in the bv_entcount array
* (so the whole array is 128kB). We updated bv_entcount[2] and
* bv_entcount[5]. In that case only first bit in the bv_bitmap will
* be set and we will write only first BRT_BLOCKSIZE out of 128kB.
*/
ulong_t *bv_bitmap;
uint64_t bv_nblocks;
} brt_vdev_t;
avl_tree_t bv_tree;
};
/*
* In-core brt
*/
typedef struct brt {
krwlock_t brt_lock;
spa_t *brt_spa;
#define brt_mos brt_spa->spa_meta_objset
uint64_t brt_rangesize;
uint64_t brt_usedspace;
uint64_t brt_savedspace;
avl_tree_t brt_pending_tree[TXG_SIZE];
kmutex_t brt_pending_lock[TXG_SIZE];
/* Sum of all entries across all bv_trees. */
uint64_t brt_nentries;
brt_vdev_t *brt_vdevs;
uint64_t brt_nvdevs;
} brt_t;
/* Size of bre_offset / sizeof (uint64_t). */
/* Size of offset / sizeof (uint64_t). */
#define BRT_KEY_WORDS (1)
#define BRE_OFFSET(bre) (DVA_GET_OFFSET(&(bre)->bre_bp.blk_dva[0]))
/*
* In-core brt entry.
* On-disk we use bre_offset as the key and bre_refcount as the value.
* On-disk we use ZAP with offset as the key and count as the value.
*/
typedef struct brt_entry {
uint64_t bre_offset;
uint64_t bre_refcount;
avl_node_t bre_node;
blkptr_t bre_bp;
uint64_t bre_count;
uint64_t bre_pcount;
} brt_entry_t;
typedef struct brt_pending_entry {
blkptr_t bpe_bp;
int bpe_count;
avl_node_t bpe_node;
} brt_pending_entry_t;
#ifdef __cplusplus
}
#endif

View File

@ -53,6 +53,7 @@ extern "C" {
/*
* Forward references that lots of things need.
*/
typedef struct brt_vdev brt_vdev_t;
typedef struct spa spa_t;
typedef struct vdev vdev_t;
typedef struct metaslab metaslab_t;

View File

@ -412,8 +412,12 @@ struct spa {
uint64_t spa_dedup_dspace; /* Cache get_dedup_dspace() */
uint64_t spa_dedup_checksum; /* default dedup checksum */
uint64_t spa_dspace; /* dspace in normal class */
uint64_t spa_rdspace; /* raw (non-dedup) --//-- */
boolean_t spa_active_ddt_prune; /* ddt prune process active */
struct brt *spa_brt; /* in-core BRT */
brt_vdev_t **spa_brt_vdevs; /* array of per-vdev BRTs */
uint64_t spa_brt_nvdevs; /* number of vdevs in BRT */
uint64_t spa_brt_rangesize; /* pool's BRT range size */
krwlock_t spa_brt_lock; /* Protects brt_vdevs/nvdevs */
kmutex_t spa_vdev_top_lock; /* dueling offline/remove */
kmutex_t spa_proc_lock; /* protects spa_proc* */
kcondvar_t spa_proc_cv; /* spa_proc_state transitions */

View File

@ -223,11 +223,15 @@ int zap_lookup_norm(objset_t *ds, uint64_t zapobj, const char *name,
boolean_t *normalization_conflictp);
int zap_lookup_uint64(objset_t *os, uint64_t zapobj, const uint64_t *key,
int key_numints, uint64_t integer_size, uint64_t num_integers, void *buf);
int zap_lookup_uint64_by_dnode(dnode_t *dn, const uint64_t *key,
int key_numints, uint64_t integer_size, uint64_t num_integers, void *buf);
int zap_contains(objset_t *ds, uint64_t zapobj, const char *name);
int zap_prefetch(objset_t *os, uint64_t zapobj, const char *name);
int zap_prefetch_object(objset_t *os, uint64_t zapobj);
int zap_prefetch_uint64(objset_t *os, uint64_t zapobj, const uint64_t *key,
int key_numints);
int zap_prefetch_uint64_by_dnode(dnode_t *dn, const uint64_t *key,
int key_numints);
int zap_lookup_by_dnode(dnode_t *dn, const char *name,
uint64_t integer_size, uint64_t num_integers, void *buf);
@ -236,9 +240,6 @@ int zap_lookup_norm_by_dnode(dnode_t *dn, const char *name,
matchtype_t mt, char *realname, int rn_len,
boolean_t *ncp);
int zap_count_write_by_dnode(dnode_t *dn, const char *name,
int add, zfs_refcount_t *towrite, zfs_refcount_t *tooverwrite);
/*
* Create an attribute with the given name and value.
*

View File

@ -109,7 +109,7 @@ Stops and cancels an in-progress removal of a top-level vdev.
.El
.
.Sh EXAMPLES
.\" These are, respectively, examples 14 from zpool.8
.\" These are, respectively, examples 15 from zpool.8
.\" Make sure to update them bidirectionally
.Ss Example 1 : No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
@ -142,9 +142,43 @@ The command to remove the mirrored log
.Ar mirror-2 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-2
.Pp
At this point, the log device no longer exists
(both sides of the mirror have been removed):
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored data
.Ar mirror-1 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-1
.Pp
After
.Ar mirror-1 No has been evacuated, the pool remains redundant, but
the total amount of space is reduced:
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
.Ed
.
.Sh SEE ALSO
.Xr zpool-add 8 ,

View File

@ -405,9 +405,43 @@ The command to remove the mirrored log
.Ar mirror-2 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-2
.Pp
At this point, the log device no longer exists
(both sides of the mirror have been removed):
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored data
.Ar mirror-1 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-1
.Pp
After
.Ar mirror-1 No has been evacuated, the pool remains redundant, but
the total amount of space is reduced:
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
.Ed
.
.Ss Example 16 : No Displaying expanded space on a device
The following command displays the detailed information for the pool

View File

@ -291,8 +291,12 @@ zfs_ioctl(vnode_t *vp, ulong_t com, intptr_t data, int flag, cred_t *cred,
case F_SEEK_HOLE:
{
off = *(offset_t *)data;
error = vn_lock(vp, LK_SHARED);
if (error)
return (error);
/* offset parameter is in/out */
error = zfs_holey(VTOZ(vp), com, &off);
VOP_UNLOCK(vp);
if (error)
return (error);
*(offset_t *)data = off;
@ -452,8 +456,10 @@ mappedread_sf(znode_t *zp, int nbytes, zfs_uio_t *uio)
if (!vm_page_wired(pp) && pp->valid == 0 &&
vm_page_busy_tryupgrade(pp))
vm_page_free(pp);
else
else {
vm_page_deactivate_noreuse(pp);
vm_page_sunbusy(pp);
}
zfs_vmobject_wunlock(obj);
}
} else {
@ -3928,6 +3934,7 @@ zfs_getpages(struct vnode *vp, vm_page_t *ma, int count, int *rbehind,
if (zfs_enter_verify_zp(zfsvfs, zp, FTAG) != 0)
return (zfs_vm_pagerret_error);
object = ma[0]->object;
start = IDX_TO_OFF(ma[0]->pindex);
end = IDX_TO_OFF(ma[count - 1]->pindex + 1);
@ -3936,33 +3943,47 @@ zfs_getpages(struct vnode *vp, vm_page_t *ma, int count, int *rbehind,
* Note that we need to handle the case of the block size growing.
*/
for (;;) {
uint64_t len;
blksz = zp->z_blksz;
len = roundup(end, blksz) - rounddown(start, blksz);
lr = zfs_rangelock_tryenter(&zp->z_rangelock,
rounddown(start, blksz),
roundup(end, blksz) - rounddown(start, blksz), RL_READER);
rounddown(start, blksz), len, RL_READER);
if (lr == NULL) {
if (rahead != NULL) {
*rahead = 0;
rahead = NULL;
/*
* Avoid a deadlock with update_pages(). We need to
* hold the range lock when copying from the DMU, so
* give up the busy lock to allow update_pages() to
* proceed. We might need to allocate new pages, which
* isn't quite right since this allocation isn't subject
* to the page fault handler's OOM logic, but this is
* the best we can do for now.
*/
for (int i = 0; i < count; i++) {
ASSERT(vm_page_none_valid(ma[i]));
vm_page_xunbusy(ma[i]);
}
if (rbehind != NULL) {
*rbehind = 0;
rbehind = NULL;
}
break;
lr = zfs_rangelock_enter(&zp->z_rangelock,
rounddown(start, blksz), len, RL_READER);
zfs_vmobject_wlock(object);
(void) vm_page_grab_pages(object, OFF_TO_IDX(start),
VM_ALLOC_NORMAL | VM_ALLOC_WAITOK | VM_ALLOC_ZERO,
ma, count);
zfs_vmobject_wunlock(object);
}
if (blksz == zp->z_blksz)
break;
zfs_rangelock_exit(lr);
}
object = ma[0]->object;
zfs_vmobject_wlock(object);
obj_size = object->un_pager.vnp.vnp_size;
zfs_vmobject_wunlock(object);
if (IDX_TO_OFF(ma[count - 1]->pindex) >= obj_size) {
if (lr != NULL)
zfs_rangelock_exit(lr);
zfs_rangelock_exit(lr);
zfs_exit(zfsvfs, FTAG);
return (zfs_vm_pagerret_bad);
}
@ -3987,11 +4008,33 @@ zfs_getpages(struct vnode *vp, vm_page_t *ma, int count, int *rbehind,
* ZFS will panic if we request DMU to read beyond the end of the last
* allocated block.
*/
error = dmu_read_pages(zfsvfs->z_os, zp->z_id, ma, count, &pgsin_b,
&pgsin_a, MIN(end, obj_size) - (end - PAGE_SIZE));
for (int i = 0; i < count; i++) {
int dummypgsin, count1, j, last_size;
if (lr != NULL)
zfs_rangelock_exit(lr);
if (vm_page_any_valid(ma[i])) {
ASSERT(vm_page_all_valid(ma[i]));
continue;
}
for (j = i + 1; j < count; j++) {
if (vm_page_any_valid(ma[j])) {
ASSERT(vm_page_all_valid(ma[j]));
break;
}
}
count1 = j - i;
dummypgsin = 0;
last_size = j == count ?
MIN(end, obj_size) - (end - PAGE_SIZE) : PAGE_SIZE;
error = dmu_read_pages(zfsvfs->z_os, zp->z_id, &ma[i], count1,
i == 0 ? &pgsin_b : &dummypgsin,
j == count ? &pgsin_a : &dummypgsin,
last_size);
if (error != 0)
break;
i += count1 - 1;
}
zfs_rangelock_exit(lr);
ZFS_ACCESSTIME_STAMP(zfsvfs, zp);
dataset_kstats_update_read_kstats(&zfsvfs->z_kstat, count*PAGE_SIZE);
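The loop above replaces the single dmu_read_pages() call with one call per run of still-invalid pages, so pages made valid by a concurrent fault are skipped. A minimal userspace sketch of the same run-splitting shape, over a plain validity array instead of vm_page_t (all names here are illustrative):

#include <stdbool.h>
#include <stdio.h>

/* Issue one "read" per maximal run of invalid entries. */
static void
read_invalid_runs(const bool *valid, int count)
{
	for (int i = 0; i < count; i++) {
		if (valid[i])
			continue;
		int j = i + 1;
		while (j < count && !valid[j])
			j++;
		printf("read pages [%d, %d)\n", i, j);
		i = j - 1;	/* loop increment steps past the run */
	}
}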
@ -6159,7 +6202,7 @@ zfs_freebsd_copy_file_range(struct vop_copy_file_range_args *ap)
} else {
#if (__FreeBSD_version >= 1302506 && __FreeBSD_version < 1400000) || \
__FreeBSD_version >= 1400086
vn_lock_pair(invp, false, LK_EXCLUSIVE, outvp, false,
vn_lock_pair(invp, false, LK_SHARED, outvp, false,
LK_EXCLUSIVE);
#else
vn_lock_pair(invp, false, outvp, false);

View File

@ -375,7 +375,18 @@ zpl_prune_sb(uint64_t nr_to_scan, void *arg)
struct super_block *sb = (struct super_block *)arg;
int objects = 0;
(void) -zfs_prune(sb, nr_to_scan, &objects);
/*
* deactivate_locked_super() calls shrinker_free() and only then the
* sops->kill_sb callback, resulting in a UAF on umount when
* zpl_prune_sb tries to reach the shrinker functions of a dataset
* that is being unmounted. Increment s_active if it is not zero, but
* don't prune if it is zero - an umount could be underway.
*/
if (atomic_inc_not_zero(&sb->s_active)) {
(void) -zfs_prune(sb, nr_to_scan, &objects);
atomic_dec(&sb->s_active);
}
}
const struct super_operations zpl_super_operations = {

View File

@ -1176,7 +1176,7 @@ zvol_queue_limits_init(zvol_queue_limits_t *limits, zvol_state_t *zv,
limits->zql_max_segment_size = UINT_MAX;
}
limits->zql_io_opt = zv->zv_volblocksize;
limits->zql_io_opt = DMU_MAX_ACCESS / 2;
limits->zql_physical_block_size = zv->zv_volblocksize;
limits->zql_max_discard_sectors =

View File

@ -1074,12 +1074,9 @@ buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp)
ARCSTAT_BUMP(arcstat_hash_collisions);
if (i == 1)
ARCSTAT_BUMP(arcstat_hash_chains);
ARCSTAT_MAX(arcstat_hash_chain_max, i);
}
uint64_t he = atomic_inc_64_nv(
&arc_stats.arcstat_hash_elements.value.ui64);
ARCSTAT_MAX(arcstat_hash_elements_max, he);
ARCSTAT_BUMP(arcstat_hash_elements);
return (NULL);
}
@ -1103,8 +1100,7 @@ buf_hash_remove(arc_buf_hdr_t *hdr)
arc_hdr_clear_flags(hdr, ARC_FLAG_IN_HASH_TABLE);
/* collect some hash table performance data */
atomic_dec_64(&arc_stats.arcstat_hash_elements.value.ui64);
ARCSTAT_BUMPDOWN(arcstat_hash_elements);
if (buf_hash_table.ht_table[idx] &&
buf_hash_table.ht_table[idx]->b_hash_next == NULL)
ARCSTAT_BUMPDOWN(arcstat_hash_chains);
@ -7008,6 +7004,9 @@ arc_kstat_update(kstat_t *ksp, int rw)
wmsum_value(&arc_sums.arcstat_evict_l2_ineligible);
as->arcstat_evict_l2_skip.value.ui64 =
wmsum_value(&arc_sums.arcstat_evict_l2_skip);
as->arcstat_hash_elements.value.ui64 =
as->arcstat_hash_elements_max.value.ui64 =
wmsum_value(&arc_sums.arcstat_hash_elements);
as->arcstat_hash_collisions.value.ui64 =
wmsum_value(&arc_sums.arcstat_hash_collisions);
as->arcstat_hash_chains.value.ui64 =
@ -7432,6 +7431,7 @@ arc_state_init(void)
wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mru, 0);
wmsum_init(&arc_sums.arcstat_evict_l2_ineligible, 0);
wmsum_init(&arc_sums.arcstat_evict_l2_skip, 0);
wmsum_init(&arc_sums.arcstat_hash_elements, 0);
wmsum_init(&arc_sums.arcstat_hash_collisions, 0);
wmsum_init(&arc_sums.arcstat_hash_chains, 0);
aggsum_init(&arc_sums.arcstat_size, 0);
@ -7590,6 +7590,7 @@ arc_state_fini(void)
wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mru);
wmsum_fini(&arc_sums.arcstat_evict_l2_ineligible);
wmsum_fini(&arc_sums.arcstat_evict_l2_skip);
wmsum_fini(&arc_sums.arcstat_hash_elements);
wmsum_fini(&arc_sums.arcstat_hash_collisions);
wmsum_fini(&arc_sums.arcstat_hash_chains);
aggsum_fini(&arc_sums.arcstat_size);
@ -9287,6 +9288,14 @@ skip:
hdr->b_l2hdr.b_hits = 0;
hdr->b_l2hdr.b_arcs_state =
hdr->b_l1hdr.b_state->arcs_state;
arc_hdr_set_flags(hdr, ARC_FLAG_HAS_L2HDR |
ARC_FLAG_L2_WRITING);
(void) zfs_refcount_add_many(&dev->l2ad_alloc,
arc_hdr_size(hdr), hdr);
l2arc_hdr_arcstats_increment(hdr);
vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
mutex_enter(&dev->l2ad_mtx);
if (pio == NULL) {
/*
@ -9298,12 +9307,6 @@ skip:
}
list_insert_head(&dev->l2ad_buflist, hdr);
mutex_exit(&dev->l2ad_mtx);
arc_hdr_set_flags(hdr, ARC_FLAG_HAS_L2HDR |
ARC_FLAG_L2_WRITING);
(void) zfs_refcount_add_many(&dev->l2ad_alloc,
arc_hdr_size(hdr), hdr);
l2arc_hdr_arcstats_increment(hdr);
boolean_t commit = l2arc_log_blk_insert(dev, hdr);
mutex_exit(hash_lock);
@ -9333,7 +9336,6 @@ skip:
write_psize += psize;
write_asize += asize;
dev->l2ad_hand += asize;
vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
if (commit) {
/* l2ad_hand will be adjusted inside. */
@ -9844,6 +9846,37 @@ l2arc_spa_rebuild_start(spa_t *spa)
}
}
void
l2arc_spa_rebuild_stop(spa_t *spa)
{
ASSERT(MUTEX_HELD(&spa_namespace_lock) ||
spa->spa_export_thread == curthread);
for (int i = 0; i < spa->spa_l2cache.sav_count; i++) {
l2arc_dev_t *dev =
l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]);
if (dev == NULL)
continue;
mutex_enter(&l2arc_rebuild_thr_lock);
dev->l2ad_rebuild_cancel = B_TRUE;
mutex_exit(&l2arc_rebuild_thr_lock);
}
for (int i = 0; i < spa->spa_l2cache.sav_count; i++) {
l2arc_dev_t *dev =
l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]);
if (dev == NULL)
continue;
mutex_enter(&l2arc_rebuild_thr_lock);
if (dev->l2ad_rebuild_began == B_TRUE) {
while (dev->l2ad_rebuild == B_TRUE) {
cv_wait(&l2arc_rebuild_thr_cv,
&l2arc_rebuild_thr_lock);
}
}
mutex_exit(&l2arc_rebuild_thr_lock);
}
}
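l2arc_spa_rebuild_stop() deliberately makes two passes: the first marks every cache device's rebuild as cancelled, the second waits for the rebuild threads to notice. A generic sketch of this request-then-wait pattern, using the same synchronization primitives as the surrounding code (the worker structure and its fields are hypothetical):

/* Phase 1: request cancellation everywhere without blocking. */
for (int i = 0; i < nworkers; i++) {
	mutex_enter(&thr_lock);
	workers[i].cancel = B_TRUE;
	mutex_exit(&thr_lock);
}
/*
 * Phase 2: wait for each worker to exit. Because every cancel flag
 * is already set, the shutdowns overlap instead of running serially.
 */
for (int i = 0; i < nworkers; i++) {
	mutex_enter(&thr_lock);
	while (workers[i].running)
		cv_wait(&thr_cv, &thr_lock);
	mutex_exit(&thr_lock);
}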
/*
* Main entry point for L2ARC rebuilding.
*/
@ -9852,12 +9885,12 @@ l2arc_dev_rebuild_thread(void *arg)
{
l2arc_dev_t *dev = arg;
VERIFY(!dev->l2ad_rebuild_cancel);
VERIFY(dev->l2ad_rebuild);
(void) l2arc_rebuild(dev);
mutex_enter(&l2arc_rebuild_thr_lock);
dev->l2ad_rebuild_began = B_FALSE;
dev->l2ad_rebuild = B_FALSE;
cv_signal(&l2arc_rebuild_thr_cv);
mutex_exit(&l2arc_rebuild_thr_lock);
thread_exit();
@ -10008,8 +10041,6 @@ l2arc_rebuild(l2arc_dev_t *dev)
for (;;) {
mutex_enter(&l2arc_rebuild_thr_lock);
if (dev->l2ad_rebuild_cancel) {
dev->l2ad_rebuild = B_FALSE;
cv_signal(&l2arc_rebuild_thr_cv);
mutex_exit(&l2arc_rebuild_thr_lock);
err = SET_ERROR(ECANCELED);
goto out;
@ -10585,6 +10616,8 @@ l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, l2arc_write_callback_t *cb)
(void) zio_nowait(wzio);
dev->l2ad_hand += asize;
vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
/*
* Include the committed log block's pointer in the list of pointers
* to log blocks present in the L2ARC device.
@ -10598,7 +10631,6 @@ l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, l2arc_write_callback_t *cb)
zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf);
zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf);
mutex_exit(&dev->l2ad_mtx);
vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
/* bump the kstats */
ARCSTAT_INCR(arcstat_l2_write_bytes, asize);

File diff suppressed because it is too large

View File

@ -89,7 +89,6 @@ typedef struct dbuf_stats {
kstat_named_t hash_misses;
kstat_named_t hash_collisions;
kstat_named_t hash_elements;
kstat_named_t hash_elements_max;
/*
* Number of sublists containing more than one dbuf in the dbuf
* hash table. Keep track of the longest hash chain.
@ -134,7 +133,6 @@ dbuf_stats_t dbuf_stats = {
{ "hash_misses", KSTAT_DATA_UINT64 },
{ "hash_collisions", KSTAT_DATA_UINT64 },
{ "hash_elements", KSTAT_DATA_UINT64 },
{ "hash_elements_max", KSTAT_DATA_UINT64 },
{ "hash_chains", KSTAT_DATA_UINT64 },
{ "hash_chain_max", KSTAT_DATA_UINT64 },
{ "hash_insert_race", KSTAT_DATA_UINT64 },
@ -154,6 +152,7 @@ struct {
wmsum_t hash_hits;
wmsum_t hash_misses;
wmsum_t hash_collisions;
wmsum_t hash_elements;
wmsum_t hash_chains;
wmsum_t hash_insert_race;
wmsum_t metadata_cache_count;
@ -432,8 +431,7 @@ dbuf_hash_insert(dmu_buf_impl_t *db)
db->db_hash_next = h->hash_table[idx];
h->hash_table[idx] = db;
mutex_exit(DBUF_HASH_MUTEX(h, idx));
uint64_t he = atomic_inc_64_nv(&dbuf_stats.hash_elements.value.ui64);
DBUF_STAT_MAX(hash_elements_max, he);
DBUF_STAT_BUMP(hash_elements);
return (NULL);
}
@ -506,7 +504,7 @@ dbuf_hash_remove(dmu_buf_impl_t *db)
h->hash_table[idx]->db_hash_next == NULL)
DBUF_STAT_BUMPDOWN(hash_chains);
mutex_exit(DBUF_HASH_MUTEX(h, idx));
atomic_dec_64(&dbuf_stats.hash_elements.value.ui64);
DBUF_STAT_BUMPDOWN(hash_elements);
}
typedef enum {
@ -903,6 +901,8 @@ dbuf_kstat_update(kstat_t *ksp, int rw)
wmsum_value(&dbuf_sums.hash_misses);
ds->hash_collisions.value.ui64 =
wmsum_value(&dbuf_sums.hash_collisions);
ds->hash_elements.value.ui64 =
wmsum_value(&dbuf_sums.hash_elements);
ds->hash_chains.value.ui64 =
wmsum_value(&dbuf_sums.hash_chains);
ds->hash_insert_race.value.ui64 =
@ -1004,6 +1004,7 @@ dbuf_init(void)
wmsum_init(&dbuf_sums.hash_hits, 0);
wmsum_init(&dbuf_sums.hash_misses, 0);
wmsum_init(&dbuf_sums.hash_collisions, 0);
wmsum_init(&dbuf_sums.hash_elements, 0);
wmsum_init(&dbuf_sums.hash_chains, 0);
wmsum_init(&dbuf_sums.hash_insert_race, 0);
wmsum_init(&dbuf_sums.metadata_cache_count, 0);
@ -1077,6 +1078,7 @@ dbuf_fini(void)
wmsum_fini(&dbuf_sums.hash_hits);
wmsum_fini(&dbuf_sums.hash_misses);
wmsum_fini(&dbuf_sums.hash_collisions);
wmsum_fini(&dbuf_sums.hash_elements);
wmsum_fini(&dbuf_sums.hash_chains);
wmsum_fini(&dbuf_sums.hash_insert_race);
wmsum_fini(&dbuf_sums.metadata_cache_count);
@ -2578,8 +2580,11 @@ dbuf_undirty(dmu_buf_impl_t *db, dmu_tx_t *tx)
* We are freeing a block that we cloned in the same
* transaction group.
*/
brt_pending_remove(dmu_objset_spa(db->db_objset),
&dr->dt.dl.dr_overridden_by, tx);
blkptr_t *bp = &dr->dt.dl.dr_overridden_by;
if (!BP_IS_HOLE(bp) && !BP_IS_EMBEDDED(bp)) {
brt_pending_remove(dmu_objset_spa(db->db_objset),
bp, tx);
}
}
dnode_t *dn = dr->dr_dnode;
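Holes and embedded block pointers carry no allocated DVAs and so can never have been added to the BRT pending tree; the new guard skips brt_pending_remove() for them. A tiny illustrative predicate (not in the patch) capturing the condition:

/* Only block pointers with real DVAs can be tracked by the BRT. */
static inline boolean_t
brt_bp_trackable(const blkptr_t *bp)
{
	return (!BP_IS_HOLE(bp) && !BP_IS_EMBEDDED(bp));
}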

View File

@ -68,6 +68,7 @@
#include <sys/zio_compress.h>
#include <zfs_fletcher.h>
#include <sys/zio_checksum.h>
#include <sys/brt.h>
/*
* The SPA supports block sizes up to 16MB. However, very large blocks
@ -289,8 +290,26 @@ dsl_dataset_block_kill(dsl_dataset_t *ds, const blkptr_t *bp, dmu_tx_t *tx,
if (BP_GET_LOGICAL_BIRTH(bp) > dsl_dataset_phys(ds)->ds_prev_snap_txg) {
int64_t delta;
dprintf_bp(bp, "freeing ds=%llu", (u_longlong_t)ds->ds_object);
dsl_free(tx->tx_pool, tx->tx_txg, bp);
/*
* Put blocks that would create IO on the pool's deadlist for
* dsl_process_async_destroys() to find. This prevents zio_free()
* from creating ZIO_TYPE_FREE IOs for them, which are very heavy
* and can lead to out-of-memory conditions if something tries to
* free millions of blocks in the same txg.
*/
boolean_t defer = spa_version(spa) >= SPA_VERSION_DEADLISTS &&
(BP_IS_GANG(bp) || BP_GET_DEDUP(bp) ||
brt_maybe_exists(spa, bp));
if (defer) {
dprintf_bp(bp, "putting on free list: %s", "");
bpobj_enqueue(&ds->ds_dir->dd_pool->dp_free_bpobj,
bp, B_FALSE, tx);
} else {
dprintf_bp(bp, "freeing ds=%llu",
(u_longlong_t)ds->ds_object);
dsl_free(tx->tx_pool, tx->tx_txg, bp);
}
mutex_enter(&ds->ds_lock);
ASSERT(dsl_dataset_phys(ds)->ds_unique_bytes >= used ||
@ -298,9 +317,14 @@ dsl_dataset_block_kill(dsl_dataset_t *ds, const blkptr_t *bp, dmu_tx_t *tx,
delta = parent_delta(ds, -used);
dsl_dataset_phys(ds)->ds_unique_bytes -= used;
mutex_exit(&ds->ds_lock);
dsl_dir_diduse_transfer_space(ds->ds_dir,
delta, -compressed, -uncompressed, -used,
DD_USED_REFRSRV, DD_USED_HEAD, tx);
if (defer)
dsl_dir_diduse_space(tx->tx_pool->dp_free_dir,
DD_USED_HEAD, used, compressed, uncompressed, tx);
} else {
dprintf_bp(bp, "putting on dead list: %s", "");
if (async) {

View File

@ -2081,6 +2081,7 @@ spa_unload(spa_t *spa)
vdev_trim_stop_all(root_vdev, VDEV_TRIM_ACTIVE);
vdev_autotrim_stop_all(spa);
vdev_rebuild_stop_all(spa);
l2arc_spa_rebuild_stop(spa);
}
}
@ -7115,6 +7116,7 @@ spa_export_common(const char *pool, int new_state, nvlist_t **oldconfig,
vdev_trim_stop_all(rvd, VDEV_TRIM_ACTIVE);
vdev_autotrim_stop_all(spa);
vdev_rebuild_stop_all(spa);
l2arc_spa_rebuild_stop(spa);
/*
* We want this to be reflected on every label,

View File

@ -1870,13 +1870,7 @@ spa_get_slop_space(spa_t *spa)
if (spa->spa_dedup_dspace == ~0ULL)
spa_update_dspace(spa);
/*
* spa_get_dspace() includes the space only logically "used" by
* deduplicated data, so since it's not useful to reserve more
* space with more deduplicated data, we subtract that out here.
*/
space =
spa_get_dspace(spa) - spa->spa_dedup_dspace - brt_get_dspace(spa);
space = spa->spa_rdspace;
slop = MIN(space >> spa_slop_shift, spa_max_slop);
/*
@ -1912,8 +1906,7 @@ spa_get_checkpoint_space(spa_t *spa)
void
spa_update_dspace(spa_t *spa)
{
spa->spa_dspace = metaslab_class_get_dspace(spa_normal_class(spa)) +
ddt_get_dedup_dspace(spa) + brt_get_dspace(spa);
spa->spa_rdspace = metaslab_class_get_dspace(spa_normal_class(spa));
if (spa->spa_nonallocating_dspace > 0) {
/*
* Subtract the space provided by all non-allocating vdevs that
@ -1933,9 +1926,11 @@ spa_update_dspace(spa_t *spa)
* doesn't matter that the data we are moving may be
* allocated twice (on the old device and the new device).
*/
ASSERT3U(spa->spa_dspace, >=, spa->spa_nonallocating_dspace);
spa->spa_dspace -= spa->spa_nonallocating_dspace;
ASSERT3U(spa->spa_rdspace, >=, spa->spa_nonallocating_dspace);
spa->spa_rdspace -= spa->spa_nonallocating_dspace;
}
spa->spa_dspace = spa->spa_rdspace + ddt_get_dedup_dspace(spa) +
brt_get_dspace(spa);
}
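A worked example of the new accounting split, with invented numbers: given 10 TiB of raw allocatable space, 2 TiB of logical space "saved" by dedup, and 1 TiB saved by block cloning, spa_rdspace = 10 TiB while spa_dspace = 10 + 2 + 1 = 13 TiB. Since spa_get_slop_space() now sizes slop from spa_rdspace alone, inflated dedup and clone accounting no longer grows the reserved slop.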
/*

View File

@ -248,20 +248,63 @@ zap_leaf_array_create(zap_leaf_t *l, const char *buf,
return (chunk_head);
}
static void
zap_leaf_array_free(zap_leaf_t *l, uint16_t *chunkp)
/*
* Non-destructively copy array between leaves.
*/
static uint16_t
zap_leaf_array_copy(zap_leaf_t *l, uint16_t chunk, zap_leaf_t *nl)
{
uint16_t chunk = *chunkp;
*chunkp = CHAIN_END;
uint16_t new_chunk;
uint16_t *nchunkp = &new_chunk;
while (chunk != CHAIN_END) {
uint_t nextchunk = ZAP_LEAF_CHUNK(l, chunk).l_array.la_next;
ASSERT3U(ZAP_LEAF_CHUNK(l, chunk).l_array.la_type, ==,
ZAP_CHUNK_ARRAY);
zap_leaf_chunk_free(l, chunk);
chunk = nextchunk;
ASSERT3U(chunk, <, ZAP_LEAF_NUMCHUNKS(l));
uint16_t nchunk = zap_leaf_chunk_alloc(nl);
struct zap_leaf_array *la =
&ZAP_LEAF_CHUNK(l, chunk).l_array;
struct zap_leaf_array *nla =
&ZAP_LEAF_CHUNK(nl, nchunk).l_array;
ASSERT3U(la->la_type, ==, ZAP_CHUNK_ARRAY);
*nla = *la; /* structure assignment */
chunk = la->la_next;
*nchunkp = nchunk;
nchunkp = &nla->la_next;
}
*nchunkp = CHAIN_END;
return (new_chunk);
}
/*
* Free an array. Unlike a trivial loop of zap_leaf_chunk_free() calls,
* this does not reverse the order of chunks in the free list, which
* reduces fragmentation.
*/
static void
zap_leaf_array_free(zap_leaf_t *l, uint16_t chunk)
{
struct zap_leaf_header *hdr = &zap_leaf_phys(l)->l_hdr;
uint16_t *tailp = &hdr->lh_freelist;
uint16_t oldfree = *tailp;
while (chunk != CHAIN_END) {
ASSERT3U(chunk, <, ZAP_LEAF_NUMCHUNKS(l));
zap_leaf_chunk_t *c = &ZAP_LEAF_CHUNK(l, chunk);
ASSERT3U(c->l_array.la_type, ==, ZAP_CHUNK_ARRAY);
*tailp = chunk;
chunk = c->l_array.la_next;
c->l_free.lf_type = ZAP_CHUNK_FREE;
memset(c->l_free.lf_pad, 0, sizeof (c->l_free.lf_pad));
tailp = &c->l_free.lf_next;
ASSERT3U(hdr->lh_nfree, <, ZAP_LEAF_NUMCHUNKS(l));
hdr->lh_nfree++;
}
*tailp = oldfree;
}
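A userspace sketch of the difference the new zap_leaf_array_free() makes (types and names hypothetical): pushing each freed chunk onto the list head reverses the chain, while threading a tail pointer preserves the original chunk order and splices the old freelist on at the end, as above.

struct chunk { int id; struct chunk *next; };

/* Naive head insertion: the freelist comes out in reverse order. */
static struct chunk *
free_reversed(struct chunk *chain, struct chunk *freelist)
{
	while (chain != NULL) {
		struct chunk *next = chain->next;
		chain->next = freelist;
		freelist = chain;
		chain = next;
	}
	return (freelist);
}

/* Tail-pointer append: chunk order preserved, old freelist last. */
static struct chunk *
free_in_order(struct chunk *chain, struct chunk *freelist)
{
	struct chunk *head = chain;
	struct chunk **tailp = &head;

	while (chain != NULL) {
		tailp = &chain->next;
		chain = chain->next;
	}
	*tailp = freelist;
	return (head);
}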
/* array_len and buf_len are in integers, not bytes */
@ -515,7 +558,7 @@ zap_entry_update(zap_entry_handle_t *zeh,
if ((int)zap_leaf_phys(l)->l_hdr.lh_nfree < delta_chunks)
return (SET_ERROR(EAGAIN));
zap_leaf_array_free(l, &le->le_value_chunk);
zap_leaf_array_free(l, le->le_value_chunk);
le->le_value_chunk =
zap_leaf_array_create(l, buf, integer_size, num_integers);
le->le_value_numints = num_integers;
@ -534,10 +577,11 @@ zap_entry_remove(zap_entry_handle_t *zeh)
struct zap_leaf_entry *le = ZAP_LEAF_ENTRY(l, entry_chunk);
ASSERT3U(le->le_type, ==, ZAP_CHUNK_ENTRY);
zap_leaf_array_free(l, &le->le_name_chunk);
zap_leaf_array_free(l, &le->le_value_chunk);
*zeh->zeh_chunkp = le->le_next;
/* Free in opposite order to reduce fragmentation. */
zap_leaf_array_free(l, le->le_value_chunk);
zap_leaf_array_free(l, le->le_name_chunk);
zap_leaf_chunk_free(l, entry_chunk);
zap_leaf_phys(l)->l_hdr.lh_nentries--;
@ -701,34 +745,6 @@ zap_leaf_rehash_entry(zap_leaf_t *l, struct zap_leaf_entry *le, uint16_t entry)
return (chunkp);
}
static uint16_t
zap_leaf_transfer_array(zap_leaf_t *l, uint16_t chunk, zap_leaf_t *nl)
{
uint16_t new_chunk;
uint16_t *nchunkp = &new_chunk;
while (chunk != CHAIN_END) {
uint16_t nchunk = zap_leaf_chunk_alloc(nl);
struct zap_leaf_array *nla =
&ZAP_LEAF_CHUNK(nl, nchunk).l_array;
struct zap_leaf_array *la =
&ZAP_LEAF_CHUNK(l, chunk).l_array;
uint_t nextchunk = la->la_next;
ASSERT3U(chunk, <, ZAP_LEAF_NUMCHUNKS(l));
ASSERT3U(nchunk, <, ZAP_LEAF_NUMCHUNKS(l));
*nla = *la; /* structure assignment */
zap_leaf_chunk_free(l, chunk);
chunk = nextchunk;
*nchunkp = nchunk;
nchunkp = &nla->la_next;
}
*nchunkp = CHAIN_END;
return (new_chunk);
}
static void
zap_leaf_transfer_entry(zap_leaf_t *l, uint_t entry, zap_leaf_t *nl)
{
@ -741,10 +757,12 @@ zap_leaf_transfer_entry(zap_leaf_t *l, uint_t entry, zap_leaf_t *nl)
(void) zap_leaf_rehash_entry(nl, nle, chunk);
nle->le_name_chunk = zap_leaf_transfer_array(l, le->le_name_chunk, nl);
nle->le_value_chunk =
zap_leaf_transfer_array(l, le->le_value_chunk, nl);
nle->le_name_chunk = zap_leaf_array_copy(l, le->le_name_chunk, nl);
nle->le_value_chunk = zap_leaf_array_copy(l, le->le_value_chunk, nl);
/* Free in opposite order to reduce fragmentation. */
zap_leaf_array_free(l, le->le_value_chunk);
zap_leaf_array_free(l, le->le_name_chunk);
zap_leaf_chunk_free(l, entry);
zap_leaf_phys(l)->l_hdr.lh_nentries--;

View File

@ -1227,6 +1227,21 @@ zap_lookup_norm_by_dnode(dnode_t *dn, const char *name,
return (err);
}
static int
zap_prefetch_uint64_impl(zap_t *zap, const uint64_t *key, int key_numints)
{
zap_name_t *zn = zap_name_alloc_uint64(zap, key, key_numints);
if (zn == NULL) {
zap_unlockdir(zap, FTAG);
return (SET_ERROR(ENOTSUP));
}
fzap_prefetch(zn);
zap_name_free(zn);
zap_unlockdir(zap, FTAG);
return (0);
}
int
zap_prefetch_uint64(objset_t *os, uint64_t zapobj, const uint64_t *key,
int key_numints)
@ -1237,13 +1252,37 @@ zap_prefetch_uint64(objset_t *os, uint64_t zapobj, const uint64_t *key,
zap_lockdir(os, zapobj, NULL, RW_READER, TRUE, FALSE, FTAG, &zap);
if (err != 0)
return (err);
err = zap_prefetch_uint64_impl(zap, key, key_numints);
/* zap_prefetch_uint64_impl() calls zap_unlockdir() */
return (err);
}
int
zap_prefetch_uint64_by_dnode(dnode_t *dn, const uint64_t *key, int key_numints)
{
zap_t *zap;
int err =
zap_lockdir_by_dnode(dn, NULL, RW_READER, TRUE, FALSE, FTAG, &zap);
if (err != 0)
return (err);
err = zap_prefetch_uint64_impl(zap, key, key_numints);
/* zap_prefetch_uint64_impl() calls zap_unlockdir() */
return (err);
}
static int
zap_lookup_uint64_impl(zap_t *zap, const uint64_t *key,
int key_numints, uint64_t integer_size, uint64_t num_integers, void *buf)
{
zap_name_t *zn = zap_name_alloc_uint64(zap, key, key_numints);
if (zn == NULL) {
zap_unlockdir(zap, FTAG);
return (SET_ERROR(ENOTSUP));
}
fzap_prefetch(zn);
int err = fzap_lookup(zn, integer_size, num_integers, buf,
NULL, 0, NULL);
zap_name_free(zn);
zap_unlockdir(zap, FTAG);
return (err);
@ -1259,16 +1298,25 @@ zap_lookup_uint64(objset_t *os, uint64_t zapobj, const uint64_t *key,
zap_lockdir(os, zapobj, NULL, RW_READER, TRUE, FALSE, FTAG, &zap);
if (err != 0)
return (err);
zap_name_t *zn = zap_name_alloc_uint64(zap, key, key_numints);
if (zn == NULL) {
zap_unlockdir(zap, FTAG);
return (SET_ERROR(ENOTSUP));
}
err = zap_lookup_uint64_impl(zap, key, key_numints, integer_size,
num_integers, buf);
/* zap_lookup_uint64_impl() calls zap_unlockdir() */
return (err);
}
err = fzap_lookup(zn, integer_size, num_integers, buf,
NULL, 0, NULL);
zap_name_free(zn);
zap_unlockdir(zap, FTAG);
int
zap_lookup_uint64_by_dnode(dnode_t *dn, const uint64_t *key,
int key_numints, uint64_t integer_size, uint64_t num_integers, void *buf)
{
zap_t *zap;
int err =
zap_lockdir_by_dnode(dn, NULL, RW_READER, TRUE, FALSE, FTAG, &zap);
if (err != 0)
return (err);
err = zap_lookup_uint64_impl(zap, key, key_numints, integer_size,
num_integers, buf);
/* zap_lookup_uint64_impl() calls zap_unlockdir() */
return (err);
}
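A hypothetical caller sketch tying this to the BRT changes above: with a dnode already held, the new by-dnode variant avoids re-resolving the ZAP object. The one-word key (the DVA offset) and the count value follow the brt_entry definitions shown earlier; dn and bre are assumed to be in scope and error handling is elided.

uint64_t off = BRE_OFFSET(bre);		/* DVA[0] offset as the ZAP key */
uint64_t count;
int err;

err = zap_lookup_uint64_by_dnode(dn, &off, BRT_KEY_WORDS,
    sizeof (count), 1, &count);
if (err == 0)
	bre->bre_count = count;		/* refcount stored as the value */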

View File

@ -2192,31 +2192,20 @@ zio_delay_interrupt(zio_t *zio)
} else {
taskqid_t tid;
hrtime_t diff = zio->io_target_timestamp - now;
clock_t expire_at_tick = ddi_get_lbolt() +
NSEC_TO_TICK(diff);
int ticks = MAX(1, NSEC_TO_TICK(diff));
clock_t expire_at_tick = ddi_get_lbolt() + ticks;
DTRACE_PROBE3(zio__delay__hit, zio_t *, zio,
hrtime_t, now, hrtime_t, diff);
if (NSEC_TO_TICK(diff) == 0) {
/* Our delay is less than a jiffy - just spin */
zfs_sleep_until(zio->io_target_timestamp);
zio_interrupt(zio);
} else {
tid = taskq_dispatch_delay(system_taskq, zio_interrupt,
zio, TQ_NOSLEEP, expire_at_tick);
if (tid == TASKQID_INVALID) {
/*
* Use taskq_dispatch_delay() in the place of
* OpenZFS's timeout_generic().
* Couldn't allocate a task. Just finish the
* zio without a delay.
*/
tid = taskq_dispatch_delay(system_taskq,
zio_interrupt, zio, TQ_NOSLEEP,
expire_at_tick);
if (tid == TASKQID_INVALID) {
/*
* Couldn't allocate a task. Just
* finish the zio without a delay.
*/
zio_interrupt(zio);
}
zio_interrupt(zio);
}
}
return;
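Worked arithmetic for the MAX(1, ...) clamp above, assuming hz = 1000 (1 ms ticks): a remaining delay of 300000 ns yields NSEC_TO_TICK(diff) = 0, which the old code handled by busy-waiting until the target timestamp; clamping to one tick instead schedules the zio about a tick out, trading a slightly longer delay for not spinning. A standalone sketch:

#include <stdio.h>

#define NANOSEC		1000000000LL
#define MAX(a, b)	((a) > (b) ? (a) : (b))

static long long hz = 1000;	/* assumed tick rate */

static long long
nsec_to_tick(long long nsec)
{
	return (nsec * hz / NANOSEC);
}

int
main(void)
{
	long long diff = 300000;	/* 0.3 ms of delay remaining */

	printf("raw=%lld clamped=%lld\n",
	    nsec_to_tick(diff), MAX(1LL, nsec_to_tick(diff)));
	return (0);
}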

View File

@ -160,6 +160,12 @@ abd_fletcher_4_byteswap(abd_t *abd, uint64_t size,
abd_fletcher_4_impl(abd, size, &acd);
}
/*
* Checksum vectors.
*
* Note: you cannot change the name string for these functions, as they are
* embedded in on-disk data in some places (eg dedup table names).
*/
zio_checksum_info_t zio_checksum_table[ZIO_CHECKSUM_FUNCTIONS] = {
{{NULL, NULL}, NULL, NULL, 0, "inherit"},
{{NULL, NULL}, NULL, NULL, 0, "on"},

View File

@ -44,10 +44,6 @@ static unsigned long zio_decompress_fail_fraction = 0;
/*
* Compression vectors.
*
* NOTE: DO NOT CHANGE THE NAMES OF THESE COMPRESSION FUNCTIONS.
* THEY ARE USED AS ZAP KEY NAMES BY FAST DEDUP AND THEREFORE
* PART OF THE ON-DISK FORMAT.
*/
zio_compress_info_t zio_compress_table[ZIO_COMPRESS_FUNCTIONS] = {
{"inherit", 0, NULL, NULL, NULL},

View File

@ -32,6 +32,7 @@ Requires(post): gcc, make, perl, diffutils
%if 0%{?rhel}%{?fedora}%{?mageia}%{?suse_version}%{?openEuler}
Requires: kernel-devel >= @ZFS_META_KVER_MIN@, kernel-devel <= @ZFS_META_KVER_MAX@.999
Requires(post): kernel-devel >= @ZFS_META_KVER_MIN@, kernel-devel <= @ZFS_META_KVER_MAX@.999
Conflicts: kernel-devel < @ZFS_META_KVER_MIN@, kernel-devel > @ZFS_META_KVER_MAX@.999
Obsoletes: spl-dkms <= %{version}
%endif
Provides: %{module}-kmod = %{version}

View File

@ -19,9 +19,13 @@
*/
#include <sys/ioctl.h>
#ifdef _KERNEL
#include <sys/fcntl.h>
#else
#include <fcntl.h>
#endif
#include <linux/fs.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

View File

@ -41,9 +41,11 @@ log_must zfs set compress=zle $TESTDSTFS
for prop in "${sync_prop_vals[@]}"; do
log_must zfs set sync=$prop $TESTSRCFS
# 15*8=120, which is greater than 113, so we are sure the data won't
# be embedded into BP.
# 32767*8=262136, which is larger than a single default recordsize of
# 131072.
FILESIZE=$(random_int_between 1 32767)
FILESIZE=$(random_int_between 15 32767)
FILESIZE=$((FILESIZE * 8))
bclone_test random $FILESIZE false $TESTSRCDIR $TESTSRCDIR
done
@ -52,9 +54,11 @@ for srcprop in "${sync_prop_vals[@]}"; do
log_must zfs set sync=$srcprop $TESTSRCFS
for dstprop in "${sync_prop_vals[@]}"; do
log_must zfs set sync=$dstprop $TESTDSTFS
# 15*8=120, which is greater than 113, so we are sure the data won't
# be embedded into BP.
# 32767*8=262136, which is larger than a single default recordsize of
# 131072.
FILESIZE=$(random_int_between 1 32767)
FILESIZE=$(random_int_between 15 32767)
FILESIZE=$((FILESIZE * 8))
bclone_test random $FILESIZE false $TESTSRCDIR $TESTDSTDIR
done

View File

@ -69,15 +69,16 @@ for raid_type in "draid2:3d:6c:1s" "raidz2"; do
log_mustnot eval "zpool status -e $TESTPOOL2 | grep ONLINE"
# Check no ONLINE slow vdevs are show. Then mark IOs greater than
# 160ms slow, delay IOs 320ms to vdev6, check slow IOs.
# 750ms slow, delay IOs 1000ms to vdev6, check slow IOs.
log_must check_vdev_state $TESTPOOL2 $TESTDIR/vdev6 "ONLINE"
log_mustnot eval "zpool status -es $TESTPOOL2 | grep ONLINE"
log_must set_tunable64 ZIO_SLOW_IO_MS 160
log_must zinject -d $TESTDIR/vdev6 -D320:100 $TESTPOOL2
log_must set_tunable64 ZIO_SLOW_IO_MS 750
log_must zinject -d $TESTDIR/vdev6 -D1000:100 $TESTPOOL2
log_must mkfile 1048576 /$TESTPOOL2/testfile
sync_pool $TESTPOOL2
log_must set_tunable64 ZIO_SLOW_IO_MS $OLD_SLOW_IO
log_must zinject -c all
# Check vdev6 slow IOs are only shown when requested with -s.
log_mustnot eval "zpool status -e $TESTPOOL2 | grep $TESTDIR/vdev6 | grep ONLINE"
@ -95,10 +96,9 @@ for raid_type in "draid2:3d:6c:1s" "raidz2"; do
log_mustnot eval "zpool status -es $TESTPOOL2 | grep $TESTDIR/vdev2 | grep ONLINE"
log_mustnot eval "zpool status -es $TESTPOOL2 | grep $TESTDIR/vdev3 | grep ONLINE"
log_must zinject -c all
log_must zpool status -es $TESTPOOL2
zpool destroy $TESTPOOL2
log_must zpool destroy $TESTPOOL2
done
log_pass "Verify zpool status -e shows only unhealthy vdevs"

View File

@ -1707,9 +1707,10 @@ s32 e1000_setup_copper_link_generic(struct e1000_hw *hw)
* autonegotiation.
*/
ret_val = e1000_copper_link_autoneg(hw);
if (ret_val)
if (ret_val && !hw->mac.forced_speed_duplex)
return ret_val;
} else {
}
if (!hw->mac.autoneg || (ret_val && hw->mac.forced_speed_duplex)) {
/* PHY will be set to 10H, 10F, 100H or 100F
* depending on user settings.
*/

View File

@ -108,16 +108,19 @@ em_dump_rs(struct e1000_softc *sc)
cur = txr->tx_rsq[rs_cidx];
status = txr->tx_base[cur].upper.fields.status;
if (!(status & E1000_TXD_STAT_DD))
printf("qid[%d]->tx_rsq[%d]: %d clear ", qid, rs_cidx, cur);
printf("qid[%d]->tx_rsq[%d]: %d clear ",
qid, rs_cidx, cur);
} else {
rs_cidx = (rs_cidx-1)&(ntxd-1);
cur = txr->tx_rsq[rs_cidx];
printf("qid[%d]->tx_rsq[rs_cidx-1=%d]: %d ", qid, rs_cidx, cur);
printf("qid[%d]->tx_rsq[rs_cidx-1=%d]: %d ",
qid, rs_cidx, cur);
}
printf("cidx_prev=%d rs_pidx=%d ",txr->tx_cidx_processed,
txr->tx_rs_pidx);
for (i = 0; i < ntxd; i++) {
if (txr->tx_base[i].upper.fields.status & E1000_TXD_STAT_DD)
if (txr->tx_base[i].upper.fields.status &
E1000_TXD_STAT_DD)
printf("%d set ", i);
}
printf("\n");
@ -143,8 +146,8 @@ em_tso_setup(struct e1000_softc *sc, if_pkt_info_t pi, uint32_t *txd_upper,
hdr_len = pi->ipi_ehdrlen + pi->ipi_ip_hlen + pi->ipi_tcp_hlen;
*txd_lower = (E1000_TXD_CMD_DEXT | /* Extended descr type */
E1000_TXD_DTYP_D | /* Data descr type */
E1000_TXD_CMD_TSE); /* Do TSE on this packet */
E1000_TXD_DTYP_D | /* Data descr type */
E1000_TXD_CMD_TSE); /* Do TSE on this packet */
cur = pi->ipi_pidx;
TXD = (struct e1000_context_desc *)&txr->tx_base[cur];
@ -157,7 +160,8 @@ em_tso_setup(struct e1000_softc *sc, if_pkt_info_t pi, uint32_t *txd_upper,
switch(pi->ipi_etype) {
case ETHERTYPE_IP:
/* IP and/or TCP header checksum calculation and insertion. */
*txd_upper = (E1000_TXD_POPTS_IXSM | E1000_TXD_POPTS_TXSM) << 8;
*txd_upper =
(E1000_TXD_POPTS_IXSM | E1000_TXD_POPTS_TXSM) << 8;
TXD->lower_setup.ip_fields.ipcse =
htole16(pi->ipi_ehdrlen + pi->ipi_ip_hlen - 1);
@ -183,7 +187,8 @@ em_tso_setup(struct e1000_softc *sc, if_pkt_info_t pi, uint32_t *txd_upper,
TXD->upper_setup.tcp_fields.tucss = pi->ipi_ehdrlen + pi->ipi_ip_hlen;
TXD->upper_setup.tcp_fields.tucse = 0;
TXD->upper_setup.tcp_fields.tucso =
pi->ipi_ehdrlen + pi->ipi_ip_hlen + offsetof(struct tcphdr, th_sum);
pi->ipi_ehdrlen + pi->ipi_ip_hlen +
offsetof(struct tcphdr, th_sum);
/*
* Payload size per packet w/o any headers.
@ -211,8 +216,8 @@ em_tso_setup(struct e1000_softc *sc, if_pkt_info_t pi, uint32_t *txd_upper,
if (++cur == scctx->isc_ntxd[0]) {
cur = 0;
}
DPRINTF(iflib_get_dev(sc->ctx), "%s: pidx: %d cur: %d\n", __FUNCTION__,
pi->ipi_pidx, cur);
DPRINTF(iflib_get_dev(sc->ctx), "%s: pidx: %d cur: %d\n",
__FUNCTION__, pi->ipi_pidx, cur);
return (cur);
}
@ -277,8 +282,8 @@ em_transmit_checksum_setup(struct e1000_softc *sc, if_pkt_info_t pi,
* ipcse - End offset for header checksum calculation.
* ipcso - Offset of place to put the checksum.
*
* We set ipcsX values regardless of IP version to work around HW issues
* and ipcse must be 0 for IPv6 per "PCIe GbE SDM 2.5" page 61.
* We set ipcsX values regardless of IP version to work around HW
* issues and ipcse must be 0 for IPv6 per "PCIe GbE SDM 2.5" page 61.
* IXSM controls whether it's inserted.
*/
TXD->lower_setup.ip_fields.ipcss = pi->ipi_ehdrlen;
@ -296,7 +301,8 @@ em_transmit_checksum_setup(struct e1000_softc *sc, if_pkt_info_t pi,
* tucse - End offset for payload checksum calculation.
* tucso - Offset of place to put the checksum.
*/
if (csum_flags & (CSUM_TCP | CSUM_UDP | CSUM_IP6_TCP | CSUM_IP6_UDP)) {
if (csum_flags & (CSUM_TCP | CSUM_UDP | CSUM_IP6_TCP |
CSUM_IP6_UDP)) {
uint8_t tucso;
*txd_upper |= E1000_TXD_POPTS_TXSM << 8;
@ -326,7 +332,8 @@ em_transmit_checksum_setup(struct e1000_softc *sc, if_pkt_info_t pi,
cur = 0;
}
DPRINTF(iflib_get_dev(sc->ctx),
"checksum_setup csum_flags=%x txd_upper=%x txd_lower=%x hdr_len=%d cmd=%x\n",
"checksum_setup csum_flags=%x txd_upper=%x txd_lower=%x"
" hdr_len=%d cmd=%x\n",
csum_flags, *txd_upper, *txd_lower, hdr_len, cmd);
return (cur);
}
@ -372,7 +379,8 @@ em_isc_txd_encap(void *arg, if_pkt_info_t pi)
i = em_tso_setup(sc, pi, &txd_upper, &txd_lower);
tso_desc = true;
} else if (csum_flags & EM_CSUM_OFFLOAD) {
i = em_transmit_checksum_setup(sc, pi, &txd_upper, &txd_lower);
i = em_transmit_checksum_setup(sc, pi, &txd_upper,
&txd_lower);
}
if (pi->ipi_mflags & M_VLANTAG) {
@ -414,7 +422,8 @@ em_isc_txd_encap(void *arg, if_pkt_info_t pi)
/* Now make the sentinel */
ctxd = &txr->tx_base[i];
ctxd->buffer_addr = htole64(seg_addr + seg_len);
ctxd->lower.data = htole32(cmd | txd_lower | TSO_WORKAROUND);
ctxd->lower.data =
htole32(cmd | txd_lower | TSO_WORKAROUND);
ctxd->upper.data = htole32(txd_upper);
pidx_last = i;
if (++i == scctx->isc_ntxd[0])
@ -429,7 +438,8 @@ em_isc_txd_encap(void *arg, if_pkt_info_t pi)
pidx_last = i;
if (++i == scctx->isc_ntxd[0])
i = 0;
DPRINTF(iflib_get_dev(sc->ctx), "pidx_last=%d i=%d ntxd[0]=%d\n",
DPRINTF(iflib_get_dev(sc->ctx),
"pidx_last=%d i=%d ntxd[0]=%d\n",
pidx_last, i, scctx->isc_ntxd[0]);
}
}
@ -449,7 +459,8 @@ em_isc_txd_encap(void *arg, if_pkt_info_t pi)
}
ctxd->lower.data |= htole32(E1000_TXD_CMD_EOP | txd_flags);
DPRINTF(iflib_get_dev(sc->ctx),
"tx_buffers[%d]->eop = %d ipi_new_pidx=%d\n", first, pidx_last, i);
"tx_buffers[%d]->eop = %d ipi_new_pidx=%d\n",
first, pidx_last, i);
pi->ipi_new_pidx = i;
/* Sent data accounting for AIM */
@ -508,8 +519,8 @@ em_isc_txd_credits_update(void *arg, uint16_t txqid, bool clear)
delta += ntxd;
MPASS(delta > 0);
DPRINTF(iflib_get_dev(sc->ctx),
"%s: cidx_processed=%u cur=%u clear=%d delta=%d\n",
__FUNCTION__, prev, cur, clear, delta);
"%s: cidx_processed=%u cur=%u clear=%d delta=%d\n",
__FUNCTION__, prev, cur, clear, delta);
processed += delta;
prev = cur;
@ -699,7 +710,8 @@ lem_isc_rxd_pkt_get(void *arg, if_rxd_info_t ri)
if (scctx->isc_capenable & IFCAP_VLAN_HWTAGGING &&
status & E1000_RXD_STAT_VP) {
ri->iri_vtag = le16toh(rxd->special & E1000_RXD_SPC_VLAN_MASK);
ri->iri_vtag =
le16toh(rxd->special & E1000_RXD_SPC_VLAN_MASK);
ri->iri_flags |= M_VLANTAG;
}
@ -789,7 +801,8 @@ em_receive_checksum(uint16_t status, uint8_t errors, if_rxd_info_t ri)
return;
/* If there is a layer 3 or 4 error we are done */
if (__predict_false(errors & (E1000_RXD_ERR_IPE | E1000_RXD_ERR_TCPE)))
if (__predict_false(errors & (E1000_RXD_ERR_IPE |
E1000_RXD_ERR_TCPE)))
return;
/* IP Checksum Good */

File diff suppressed because it is too large

View File

@ -102,14 +102,15 @@ igb_tso_setup(struct tx_ring *txr, if_pkt_info_t pi, uint32_t *cmd_type_len,
break;
default:
panic("%s: CSUM_TSO but no supported IP version (0x%04x)",
__func__, ntohs(pi->ipi_etype));
__func__, ntohs(pi->ipi_etype));
break;
}
TXD = (struct e1000_adv_tx_context_desc *) &txr->tx_base[pi->ipi_pidx];
TXD = (struct e1000_adv_tx_context_desc *)&txr->tx_base[pi->ipi_pidx];
/* This is used in the transmit desc in encap */
paylen = pi->ipi_len - pi->ipi_ehdrlen - pi->ipi_ip_hlen - pi->ipi_tcp_hlen;
paylen = pi->ipi_len - pi->ipi_ehdrlen - pi->ipi_ip_hlen -
pi->ipi_tcp_hlen;
/* VLAN MACLEN IPLEN */
if (pi->ipi_mflags & M_VLANTAG) {
@ -147,8 +148,8 @@ igb_tso_setup(struct tx_ring *txr, if_pkt_info_t pi, uint32_t *cmd_type_len,
*
**********************************************************************/
static int
igb_tx_ctx_setup(struct tx_ring *txr, if_pkt_info_t pi, uint32_t *cmd_type_len,
uint32_t *olinfo_status)
igb_tx_ctx_setup(struct tx_ring *txr, if_pkt_info_t pi,
uint32_t *cmd_type_len, uint32_t *olinfo_status)
{
struct e1000_adv_tx_context_desc *TXD;
struct e1000_softc *sc = txr->sc;
@ -164,7 +165,7 @@ igb_tx_ctx_setup(struct tx_ring *txr, if_pkt_info_t pi, uint32_t *cmd_type_len,
*olinfo_status |= pi->ipi_len << E1000_ADVTXD_PAYLEN_SHIFT;
/* Now ready a context descriptor */
TXD = (struct e1000_adv_tx_context_desc *) &txr->tx_base[pi->ipi_pidx];
TXD = (struct e1000_adv_tx_context_desc *)&txr->tx_base[pi->ipi_pidx];
/*
** In advanced descriptors the vlan tag must
@ -246,8 +247,8 @@ igb_isc_txd_encap(void *arg, if_pkt_info_t pi)
pidx_last = olinfo_status = 0;
/* Basic descriptor defines */
cmd_type_len = (E1000_ADVTXD_DTYP_DATA |
E1000_ADVTXD_DCMD_IFCS | E1000_ADVTXD_DCMD_DEXT);
cmd_type_len = (E1000_ADVTXD_DTYP_DATA | E1000_ADVTXD_DCMD_IFCS |
E1000_ADVTXD_DCMD_DEXT);
if (pi->ipi_mflags & M_VLANTAG)
cmd_type_len |= E1000_ADVTXD_DCMD_VLE;
@ -300,9 +301,9 @@ igb_isc_txd_encap(void *arg, if_pkt_info_t pi)
static void
igb_isc_txd_flush(void *arg, uint16_t txqid, qidx_t pidx)
{
struct e1000_softc *sc = arg;
struct em_tx_queue *que = &sc->tx_queues[txqid];
struct tx_ring *txr = &que->txr;
struct e1000_softc *sc = arg;
struct em_tx_queue *que = &sc->tx_queues[txqid];
struct tx_ring *txr = &que->txr;
E1000_WRITE_REG(&sc->hw, E1000_TDT(txr->me), pidx);
}
@ -351,7 +352,8 @@ igb_isc_txd_credits_update(void *arg, uint16_t txqid, bool clear)
if (rs_cidx == txr->tx_rs_pidx)
break;
cur = txr->tx_rsq[rs_cidx];
status = ((union e1000_adv_tx_desc *)&txr->tx_base[cur])->wb.status;
status = ((union e1000_adv_tx_desc *)
&txr->tx_base[cur])->wb.status;
} while ((status & E1000_TXD_STAT_DD));
txr->tx_rs_cidx = rs_cidx;
@ -387,7 +389,8 @@ igb_isc_rxd_refill(void *arg, if_rxd_update_t iru)
}
static void
igb_isc_rxd_flush(void *arg, uint16_t rxqid, uint8_t flid __unused, qidx_t pidx)
igb_isc_rxd_flush(void *arg, uint16_t rxqid, uint8_t flid __unused,
qidx_t pidx)
{
struct e1000_softc *sc = arg;
struct em_rx_queue *que = &sc->rx_queues[rxqid];
@ -453,7 +456,8 @@ igb_isc_rxd_pkt_get(void *arg, if_rxd_info_t ri)
MPASS ((staterr & E1000_RXD_STAT_DD) != 0);
len = le16toh(rxd->wb.upper.length);
ptype = le32toh(rxd->wb.lower.lo_dword.data) & IGB_PKTTYPE_MASK;
ptype =
le32toh(rxd->wb.lower.lo_dword.data) & IGB_PKTTYPE_MASK;
ri->iri_len += len;
rxr->rx_bytes += ri->iri_len;
@ -462,7 +466,8 @@ igb_isc_rxd_pkt_get(void *arg, if_rxd_info_t ri)
eop = ((staterr & E1000_RXD_STAT_EOP) == E1000_RXD_STAT_EOP);
/* Make sure bad packets are discarded */
if (eop && ((staterr & E1000_RXDEXT_ERR_FRAME_ERR_MASK) != 0)) {
if (eop &&
((staterr & E1000_RXDEXT_ERR_FRAME_ERR_MASK) != 0)) {
sc->dropped_pkts++;
++rxr->rx_discarded;
return (EBADMSG);
@ -524,7 +529,8 @@ igb_rx_checksum(uint32_t staterr, if_rxd_info_t ri, uint32_t ptype)
return;
/* If there is a layer 3 or 4 error we are done */
if (__predict_false(errors & (E1000_RXD_ERR_IPE | E1000_RXD_ERR_TCPE)))
if (__predict_false(errors &
(E1000_RXD_ERR_IPE | E1000_RXD_ERR_TCPE)))
return;
/* IP Checksum Good */
@ -535,11 +541,13 @@ igb_rx_checksum(uint32_t staterr, if_rxd_info_t ri, uint32_t ptype)
if (__predict_true(status &
(E1000_RXD_STAT_TCPCS | E1000_RXD_STAT_UDPCS))) {
/* SCTP header present */
if (__predict_false((ptype & E1000_RXDADV_PKTTYPE_ETQF) == 0 &&
if (__predict_false(
(ptype & E1000_RXDADV_PKTTYPE_ETQF) == 0 &&
(ptype & E1000_RXDADV_PKTTYPE_SCTP) != 0)) {
ri->iri_csum_flags |= CSUM_SCTP_VALID;
} else {
ri->iri_csum_flags |= CSUM_DATA_VALID | CSUM_PSEUDO_HDR;
ri->iri_csum_flags |=
CSUM_DATA_VALID | CSUM_PSEUDO_HDR;
ri->iri_csum_data = htons(0xffff);
}
}

View File

@ -158,7 +158,7 @@ SYSCTL_INT(_hw_ena, OID_AUTO, enable_9k_mbufs, CTLFLAG_RDTUN,
int ena_force_large_llq_header = ENA_LLQ_HEADER_SIZE_POLICY_DEFAULT;
SYSCTL_INT(_hw_ena, OID_AUTO, force_large_llq_header, CTLFLAG_RDTUN,
&ena_force_large_llq_header, 0,
"Change default LLQ entry size received from the device\n");
"Change default LLQ entry size received from the device");
int ena_rss_table_size = ENA_RX_RSS_TABLE_SIZE;

View File

@ -122,6 +122,7 @@ struct hms_softc {
hid_size_t isize;
uint32_t drift_cnt;
uint32_t drift_thresh;
struct hid_location wheel_loc;
#endif
};
@ -131,6 +132,7 @@ hms_intr(void *context, void *buf, hid_size_t len)
{
struct hidmap *hm = context;
struct hms_softc *sc = device_get_softc(hm->dev);
int32_t wheel;
if (len > sc->isize)
len = sc->isize;
@ -140,8 +142,18 @@ hms_intr(void *context, void *buf, hid_size_t len)
* to return last report data in sampling mode even after touch has
* been ended. That results in cursor drift. Filter out such
* reports by comparing them with the previous one.
*
* Except this also drops consecutive mouse wheel events, because
* unlike cursor movement the wheel always moves by the same amount.
* So, don't do the filtering when there is mouse wheel movement.
*/
if (len == sc->last_irsize && memcmp(buf, sc->last_ir, len) == 0) {
if (sc->wheel_loc.size != 0)
wheel = hid_get_data(buf, len, &sc->wheel_loc);
else
wheel = 0;
if (len == sc->last_irsize && memcmp(buf, sc->last_ir, len) == 0 &&
wheel == 0) {
sc->drift_cnt++;
if (sc->drift_thresh != 0 && sc->drift_cnt >= sc->drift_thresh)
return;
@ -285,9 +297,25 @@ hms_attach(device_t dev)
/* Count number of input usages of variable type mapped to buttons */
for (hi = sc->hm.hid_items;
hi < sc->hm.hid_items + sc->hm.nhid_items;
hi++)
hi++) {
if (hi->type == HIDMAP_TYPE_VARIABLE && hi->evtype == EV_KEY)
nbuttons++;
#ifdef IICHID_SAMPLING
/*
* Make note of which part of the report descriptor is the wheel.
*/
if (hi->type == HIDMAP_TYPE_VARIABLE &&
hi->evtype == EV_REL && hi->code == REL_WHEEL) {
sc->wheel_loc = hi->loc;
/*
* Account for the leading Report ID byte
* if it is a multi-report device.
*/
if (hi->id != 0)
sc->wheel_loc.pos += 8;
}
#endif
}
/* announce information about the mouse */
device_printf(dev, "%d buttons and [%s%s%s%s%s] coordinates ID=%u\n",
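The wheel_loc.pos += 8 adjustment above exists because multi-report devices prepend a one-byte report ID, shifting every descriptor-relative field by eight bits in the raw interrupt buffer. The same adjustment factored into a helper, purely illustrative and not part of the patch:

static inline struct hid_location
hms_buf_loc(struct hid_location loc, uint8_t report_id)
{
	if (report_id != 0)
		loc.pos += 8;	/* fields start after the 1-byte ID */
	return (loc);
}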

View File

@ -49,22 +49,38 @@
static const pci_vendor_info_t igc_vendor_info_array[] =
{
/* Intel(R) PRO/1000 Network Connection - igc */
PVID(0x8086, IGC_DEV_ID_I225_LM, "Intel(R) Ethernet Controller I225-LM"),
PVID(0x8086, IGC_DEV_ID_I225_V, "Intel(R) Ethernet Controller I225-V"),
PVID(0x8086, IGC_DEV_ID_I225_K, "Intel(R) Ethernet Controller I225-K"),
PVID(0x8086, IGC_DEV_ID_I225_I, "Intel(R) Ethernet Controller I225-I"),
PVID(0x8086, IGC_DEV_ID_I220_V, "Intel(R) Ethernet Controller I220-V"),
PVID(0x8086, IGC_DEV_ID_I225_K2, "Intel(R) Ethernet Controller I225-K(2)"),
PVID(0x8086, IGC_DEV_ID_I225_LMVP, "Intel(R) Ethernet Controller I225-LMvP(2)"),
PVID(0x8086, IGC_DEV_ID_I226_K, "Intel(R) Ethernet Controller I226-K"),
PVID(0x8086, IGC_DEV_ID_I226_LMVP, "Intel(R) Ethernet Controller I226-LMvP"),
PVID(0x8086, IGC_DEV_ID_I225_IT, "Intel(R) Ethernet Controller I225-IT(2)"),
PVID(0x8086, IGC_DEV_ID_I226_LM, "Intel(R) Ethernet Controller I226-LM"),
PVID(0x8086, IGC_DEV_ID_I226_V, "Intel(R) Ethernet Controller I226-V"),
PVID(0x8086, IGC_DEV_ID_I226_IT, "Intel(R) Ethernet Controller I226-IT"),
PVID(0x8086, IGC_DEV_ID_I221_V, "Intel(R) Ethernet Controller I221-V"),
PVID(0x8086, IGC_DEV_ID_I226_BLANK_NVM, "Intel(R) Ethernet Controller I226(blankNVM)"),
PVID(0x8086, IGC_DEV_ID_I225_BLANK_NVM, "Intel(R) Ethernet Controller I225(blankNVM)"),
PVID(0x8086, IGC_DEV_ID_I225_LM,
"Intel(R) Ethernet Controller I225-LM"),
PVID(0x8086, IGC_DEV_ID_I225_V,
"Intel(R) Ethernet Controller I225-V"),
PVID(0x8086, IGC_DEV_ID_I225_K,
"Intel(R) Ethernet Controller I225-K"),
PVID(0x8086, IGC_DEV_ID_I225_I,
"Intel(R) Ethernet Controller I225-I"),
PVID(0x8086, IGC_DEV_ID_I220_V,
"Intel(R) Ethernet Controller I220-V"),
PVID(0x8086, IGC_DEV_ID_I225_K2,
"Intel(R) Ethernet Controller I225-K(2)"),
PVID(0x8086, IGC_DEV_ID_I225_LMVP,
"Intel(R) Ethernet Controller I225-LMvP(2)"),
PVID(0x8086, IGC_DEV_ID_I226_K,
"Intel(R) Ethernet Controller I226-K"),
PVID(0x8086, IGC_DEV_ID_I226_LMVP,
"Intel(R) Ethernet Controller I226-LMvP"),
PVID(0x8086, IGC_DEV_ID_I225_IT,
"Intel(R) Ethernet Controller I225-IT(2)"),
PVID(0x8086, IGC_DEV_ID_I226_LM,
"Intel(R) Ethernet Controller I226-LM"),
PVID(0x8086, IGC_DEV_ID_I226_V,
"Intel(R) Ethernet Controller I226-V"),
PVID(0x8086, IGC_DEV_ID_I226_IT,
"Intel(R) Ethernet Controller I226-IT"),
PVID(0x8086, IGC_DEV_ID_I221_V,
"Intel(R) Ethernet Controller I221-V"),
PVID(0x8086, IGC_DEV_ID_I226_BLANK_NVM,
"Intel(R) Ethernet Controller I226(blankNVM)"),
PVID(0x8086, IGC_DEV_ID_I225_BLANK_NVM,
"Intel(R) Ethernet Controller I225(blankNVM)"),
/* required last entry */
PVID_END
};
@ -80,8 +96,10 @@ static int igc_if_shutdown(if_ctx_t);
static int igc_if_suspend(if_ctx_t);
static int igc_if_resume(if_ctx_t);
static int igc_if_tx_queues_alloc(if_ctx_t, caddr_t *, uint64_t *, int, int);
static int igc_if_rx_queues_alloc(if_ctx_t, caddr_t *, uint64_t *, int, int);
static int igc_if_tx_queues_alloc(if_ctx_t, caddr_t *, uint64_t *, int,
int);
static int igc_if_rx_queues_alloc(if_ctx_t, caddr_t *, uint64_t *, int,
int);
static void igc_if_queues_free(if_ctx_t);
static uint64_t igc_if_get_counter(if_ctx_t, ift_counter);
@ -229,11 +247,12 @@ SYSCTL_INT(_hw_igc, OID_AUTO, disable_crc_stripping, CTLFLAG_RDTUN,
&igc_disable_crc_stripping, 0, "Disable CRC Stripping");
static int igc_smart_pwr_down = false;
SYSCTL_INT(_hw_igc, OID_AUTO, smart_pwr_down, CTLFLAG_RDTUN, &igc_smart_pwr_down,
SYSCTL_INT(_hw_igc, OID_AUTO, smart_pwr_down, CTLFLAG_RDTUN,
&igc_smart_pwr_down,
0, "Set to true to leave smart power down enabled on newer adapters");
/* Controls whether promiscuous also shows bad packets */
static int igc_debug_sbp = true;
static int igc_debug_sbp = false;
SYSCTL_INT(_hw_igc, OID_AUTO, sbp, CTLFLAG_RDTUN, &igc_debug_sbp, 0,
"Show bad packets in promiscuous mode");
@ -277,7 +296,8 @@ static struct if_shared_ctx igc_sctx_init = {
.isc_vendor_info = igc_vendor_info_array,
.isc_driver_version = "1",
.isc_driver = &igc_if_driver,
.isc_flags = IFLIB_NEED_SCRATCH | IFLIB_TSO_INIT_IP | IFLIB_NEED_ZERO_CSUM,
.isc_flags =
IFLIB_NEED_SCRATCH | IFLIB_TSO_INIT_IP | IFLIB_NEED_ZERO_CSUM,
.isc_nrxd_min = {IGC_MIN_RXD},
.isc_ntxd_min = {IGC_MIN_TXD},
@ -383,15 +403,20 @@ static int igc_get_regs(SYSCTL_HANDLER_ARGS)
for (j = 0; j < nrxd; j++) {
u32 staterr = le32toh(rxr->rx_base[j].wb.upper.status_error);
u32 length = le32toh(rxr->rx_base[j].wb.upper.length);
sbuf_printf(sb, "\tReceive Descriptor Address %d: %08" PRIx64 " Error:%d Length:%d\n", j, rxr->rx_base[j].read.buffer_addr, staterr, length);
sbuf_printf(sb, "\tReceive Descriptor Address %d: %08"
PRIx64 " Error:%d Length:%d\n",
j, rxr->rx_base[j].read.buffer_addr, staterr, length);
}
for (j = 0; j < min(ntxd, 256); j++) {
unsigned int *ptr = (unsigned int *)&txr->tx_base[j];
sbuf_printf(sb, "\tTXD[%03d] [0]: %08x [1]: %08x [2]: %08x [3]: %08x eop: %d DD=%d\n",
j, ptr[0], ptr[1], ptr[2], ptr[3], buf->eop,
buf->eop != -1 ? txr->tx_base[buf->eop].upper.fields.status & IGC_TXD_STAT_DD : 0);
sbuf_printf(sb, "\tTXD[%03d] [0]: %08x [1]: %08x [2]: %08x"
"[3]: %08x eop: %d DD=%d\n",
j, ptr[0], ptr[1], ptr[2], ptr[3], buf->eop,
buf->eop != -1 ?
txr->tx_base[buf->eop].upper.fields.status &
IGC_TXD_STAT_DD : 0);
}
}
@ -523,13 +548,16 @@ igc_if_attach_pre(if_ctx_t ctx)
igc_identify_hardware(ctx);
scctx->isc_tx_nsegments = IGC_MAX_SCATTER;
scctx->isc_nrxqsets_max = scctx->isc_ntxqsets_max = igc_set_num_queues(ctx);
scctx->isc_nrxqsets_max =
scctx->isc_ntxqsets_max = igc_set_num_queues(ctx);
if (bootverbose)
device_printf(dev, "attach_pre capping queues at %d\n",
scctx->isc_ntxqsets_max);
scctx->isc_txqsizes[0] = roundup2(scctx->isc_ntxd[0] * sizeof(union igc_adv_tx_desc), IGC_DBA_ALIGN);
scctx->isc_rxqsizes[0] = roundup2(scctx->isc_nrxd[0] * sizeof(union igc_adv_rx_desc), IGC_DBA_ALIGN);
scctx->isc_txqsizes[0] = roundup2(scctx->isc_ntxd[0] *
sizeof(union igc_adv_tx_desc), IGC_DBA_ALIGN);
scctx->isc_rxqsizes[0] = roundup2(scctx->isc_nrxd[0] *
sizeof(union igc_adv_rx_desc), IGC_DBA_ALIGN);
scctx->isc_txd_size[0] = sizeof(union igc_adv_tx_desc);
scctx->isc_rxd_size[0] = sizeof(union igc_adv_rx_desc);
scctx->isc_txrx = &igc_txrx;
@ -588,7 +616,8 @@ igc_if_attach_pre(if_ctx_t ctx)
sc->mta = malloc(sizeof(u8) * ETHER_ADDR_LEN *
MAX_NUM_MULTICAST_ADDRESSES, M_DEVBUF, M_NOWAIT);
if (sc->mta == NULL) {
device_printf(dev, "Can not allocate multicast setup array\n");
device_printf(dev,
"Can not allocate multicast setup array\n");
error = ENOMEM;
goto err_late;
}
@ -632,7 +661,7 @@ igc_if_attach_pre(if_ctx_t ctx)
/* Copy the permanent MAC address out of the EEPROM */
if (igc_read_mac_addr(hw) < 0) {
device_printf(dev, "EEPROM read error while reading MAC"
" address\n");
" address\n");
error = EIO;
goto err_late;
}
@ -772,7 +801,7 @@ igc_if_mtu_set(if_ctx_t ctx, uint32_t mtu)
struct igc_softc *sc = iflib_get_softc(ctx);
if_softc_ctx_t scctx = iflib_get_softc_ctx(ctx);
IOCTL_DEBUGOUT("ioctl rcv'd: SIOCSIFMTU (Set Interface MTU)");
IOCTL_DEBUGOUT("ioctl rcv'd: SIOCSIFMTU (Set Interface MTU)");
/* 9K Jumbo Frame size */
max_frame_size = 9234;
@ -817,7 +846,8 @@ igc_if_init(if_ctx_t ctx)
igc_reset(ctx);
igc_if_update_admin_status(ctx);
for (i = 0, tx_que = sc->tx_queues; i < sc->tx_num_queues; i++, tx_que++) {
for (i = 0, tx_que = sc->tx_queues; i < sc->tx_num_queues;
i++, tx_que++) {
struct tx_ring *txr = &tx_que->txr;
txr->tx_rs_cidx = txr->tx_rs_pidx;
@ -906,7 +936,7 @@ igc_neweitr(struct igc_softc *sc, struct igc_rx_queue *que,
goto igc_set_next_eitr;
}
/* Get the largest values from the associated tx and rx ring */
/* Get largest values from the associated tx and rx ring */
if (txr->tx_bytes && txr->tx_packets) {
bytes = txr->tx_bytes;
bytes_packets = txr->tx_bytes/txr->tx_packets;
@ -914,7 +944,8 @@ igc_neweitr(struct igc_softc *sc, struct igc_rx_queue *que,
}
if (rxr->rx_bytes && rxr->rx_packets) {
bytes = max(bytes, rxr->rx_bytes);
bytes_packets = max(bytes_packets, rxr->rx_bytes/rxr->rx_packets);
bytes_packets = max(bytes_packets,
rxr->rx_bytes/rxr->rx_packets);
packets = max(packets, rxr->rx_packets);
}
@ -935,7 +966,8 @@ igc_neweitr(struct igc_softc *sc, struct igc_rx_queue *que,
/* Handle TSO */
if (bytes_packets > 8000)
nextlatency = eitr_latency_bulk;
else if ((packets < 10) || (bytes_packets > 1200))
else if ((packets < 10) ||
(bytes_packets > 1200))
nextlatency = eitr_latency_bulk;
else if (packets > 35)
nextlatency = eitr_latency_lowest;
@ -954,7 +986,8 @@ igc_neweitr(struct igc_softc *sc, struct igc_rx_queue *que,
break;
default:
nextlatency = eitr_latency_low;
device_printf(sc->dev, "Unexpected neweitr transition %d\n",
device_printf(sc->dev,
"Unexpected neweitr transition %d\n",
nextlatency);
break;
}
@ -1170,13 +1203,13 @@ igc_if_media_status(if_ctx_t ctx, struct ifmediareq *ifmr)
break;
case 100:
ifmr->ifm_active |= IFM_100_TX;
break;
break;
case 1000:
ifmr->ifm_active |= IFM_1000_T;
break;
case 2500:
ifmr->ifm_active |= IFM_2500_T;
break;
ifmr->ifm_active |= IFM_2500_T;
break;
}
if (sc->link_duplex == FULL_DUPLEX)
@ -1210,9 +1243,9 @@ igc_if_media_change(if_ctx_t ctx)
case IFM_AUTO:
sc->hw.phy.autoneg_advertised = AUTONEG_ADV_DEFAULT;
break;
case IFM_2500_T:
sc->hw.phy.autoneg_advertised = ADVERTISE_2500_FULL;
break;
case IFM_2500_T:
sc->hw.phy.autoneg_advertised = ADVERTISE_2500_FULL;
break;
case IFM_1000_T:
sc->hw.phy.autoneg_advertised = ADVERTISE_1000_FULL;
break;
@ -1296,7 +1329,7 @@ igc_if_multi_set(if_ctx_t ctx)
{
struct igc_softc *sc = iflib_get_softc(ctx);
if_t ifp = iflib_get_ifp(ctx);
u8 *mta; /* Multicast array memory */
u8 *mta; /* Multicast array memory */
u32 reg_rctl = 0;
int mcnt = 0;
@ -1315,10 +1348,10 @@ igc_if_multi_set(if_ctx_t ctx)
if (igc_debug_sbp)
reg_rctl |= IGC_RCTL_SBP;
} else if (mcnt >= MAX_NUM_MULTICAST_ADDRESSES ||
if_getflags(ifp) & IFF_ALLMULTI) {
reg_rctl |= IGC_RCTL_MPE;
if_getflags(ifp) & IFF_ALLMULTI) {
reg_rctl |= IGC_RCTL_MPE;
reg_rctl &= ~IGC_RCTL_UPE;
} else
} else
reg_rctl &= ~(IGC_RCTL_UPE | IGC_RCTL_MPE);
if (mcnt < MAX_NUM_MULTICAST_ADDRESSES)
@ -1463,7 +1496,8 @@ igc_allocate_pci_resources(if_ctx_t ctx)
sc->memory = bus_alloc_resource_any(dev, SYS_RES_MEMORY,
&rid, RF_ACTIVE);
if (sc->memory == NULL) {
device_printf(dev, "Unable to allocate bus resource: memory\n");
device_printf(dev,
"Unable to allocate bus resource: memory\n");
return (ENXIO);
}
sc->osdep.mem_bus_space_tag = rman_get_bustag(sc->memory);
@ -1494,9 +1528,12 @@ igc_if_msix_intr_assign(if_ctx_t ctx, int msix)
for (i = 0; i < sc->rx_num_queues; i++, rx_que++, vector++) {
rid = vector + 1;
snprintf(buf, sizeof(buf), "rxq%d", i);
error = iflib_irq_alloc_generic(ctx, &rx_que->que_irq, rid, IFLIB_INTR_RXTX, igc_msix_que, rx_que, rx_que->me, buf);
error = iflib_irq_alloc_generic(ctx, &rx_que->que_irq, rid,
IFLIB_INTR_RXTX, igc_msix_que, rx_que, rx_que->me, buf);
if (error) {
device_printf(iflib_get_dev(ctx), "Failed to allocate que int %d err: %d", i, error);
device_printf(iflib_get_dev(ctx),
"Failed to allocate que int %d err: %d",
i, error);
sc->rx_num_queues = i + 1;
goto fail;
}
@ -1534,10 +1571,12 @@ igc_if_msix_intr_assign(if_ctx_t ctx, int msix)
/* Link interrupt */
rid = rx_vectors + 1;
error = iflib_irq_alloc_generic(ctx, &sc->irq, rid, IFLIB_INTR_ADMIN, igc_msix_link, sc, 0, "aq");
error = iflib_irq_alloc_generic(ctx, &sc->irq, rid, IFLIB_INTR_ADMIN,
igc_msix_link, sc, 0, "aq");
if (error) {
device_printf(iflib_get_dev(ctx), "Failed to register admin handler");
device_printf(iflib_get_dev(ctx),
"Failed to register admin handler");
goto fail;
}
sc->linkvec = rx_vectors;
@ -1662,12 +1701,12 @@ igc_setup_msix(if_ctx_t ctx)
static void
igc_init_dmac(struct igc_softc *sc, u32 pba)
{
device_t dev = sc->dev;
device_t dev = sc->dev;
struct igc_hw *hw = &sc->hw;
u32 dmac, reg = ~IGC_DMACR_DMAC_EN;
u16 hwm;
u16 max_frame_size;
int status;
u32 dmac, reg = ~IGC_DMACR_DMAC_EN;
u16 hwm;
u16 max_frame_size;
int status;
max_frame_size = sc->shared->isc_max_frame_size;
@ -1777,7 +1816,8 @@ igc_reset(if_ctx_t ctx)
* response (Rx) to Ethernet PAUSE frames.
* - High water mark should allow for at least two frames to be
* received after sending an XOFF.
* - Low water mark works best when it is very near the high water mark.
* - Low water mark works best when it is very near the high water
* mark.
* This allows the receiver to restart by sending XON when it has
* drained a bit. Here we use an arbitrary value of 1500 which will
* restart after one full frame is pulled from the buffer. There
@ -1957,7 +1997,8 @@ igc_setup_interface(if_ctx_t ctx)
}
static int
igc_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int ntxqs, int ntxqsets)
igc_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs,
int ntxqs, int ntxqsets)
{
struct igc_softc *sc = iflib_get_softc(ctx);
if_softc_ctx_t scctx = sc->shared;
@ -1972,7 +2013,8 @@ igc_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int ntxq
if (!(sc->tx_queues =
(struct igc_tx_queue *) malloc(sizeof(struct igc_tx_queue) *
sc->tx_num_queues, M_DEVBUF, M_NOWAIT | M_ZERO))) {
device_printf(iflib_get_dev(ctx), "Unable to allocate queue memory\n");
device_printf(iflib_get_dev(ctx),
"Unable to allocate queue memory\n");
return(ENOMEM);
}
@ -1984,14 +2026,16 @@ igc_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int ntxq
que->me = txr->me = i;
/* Allocate report status array */
if (!(txr->tx_rsq = (qidx_t *) malloc(sizeof(qidx_t) * scctx->isc_ntxd[0], M_DEVBUF, M_NOWAIT | M_ZERO))) {
device_printf(iflib_get_dev(ctx), "failed to allocate rs_idxs memory\n");
if (!(txr->tx_rsq = (qidx_t *) malloc(sizeof(qidx_t) *
scctx->isc_ntxd[0], M_DEVBUF, M_NOWAIT | M_ZERO))) {
device_printf(iflib_get_dev(ctx),
"failed to allocate rs_idxs memory\n");
error = ENOMEM;
goto fail;
}
for (j = 0; j < scctx->isc_ntxd[0]; j++)
txr->tx_rsq[j] = QIDX_INVALID;
/* get the virtual and physical address of the hardware queues */
/* get virtual and physical address of the hardware queues */
txr->tx_base = (struct igc_tx_desc *)vaddrs[i*ntxqs];
txr->tx_paddr = paddrs[i*ntxqs];
}
@ -2006,7 +2050,8 @@ fail:
}
static int
igc_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int nrxqs, int nrxqsets)
igc_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs,
int nrxqs, int nrxqsets)
{
struct igc_softc *sc = iflib_get_softc(ctx);
int error = IGC_SUCCESS;
@ -2020,7 +2065,8 @@ igc_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int nrxq
if (!(sc->rx_queues =
(struct igc_rx_queue *) malloc(sizeof(struct igc_rx_queue) *
sc->rx_num_queues, M_DEVBUF, M_NOWAIT | M_ZERO))) {
device_printf(iflib_get_dev(ctx), "Unable to allocate queue memory\n");
device_printf(iflib_get_dev(ctx),
"Unable to allocate queue memory\n");
error = ENOMEM;
goto fail;
}
@ -2032,7 +2078,7 @@ igc_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int nrxq
rxr->que = que;
que->me = rxr->me = i;
/* get the virtual and physical address of the hardware queues */
/* get virtual and physical address of the hardware queues */
rxr->rx_base = (union igc_rx_desc_extended *)vaddrs[i*nrxqs];
rxr->rx_paddr = paddrs[i*nrxqs];
}
@ -2137,7 +2183,7 @@ igc_initialize_transmit_unit(if_ctx_t ctx)
tctl = IGC_READ_REG(&sc->hw, IGC_TCTL);
tctl &= ~IGC_TCTL_CT;
tctl |= (IGC_TCTL_PSP | IGC_TCTL_RTLC | IGC_TCTL_EN |
(IGC_COLLISION_THRESHOLD << IGC_CT_SHIFT));
/* This write will effectively turn on the transmit unit. */
IGC_WRITE_REG(&sc->hw, IGC_TCTL, tctl);
@ -2247,11 +2293,9 @@ igc_initialize_receive_unit(if_ctx_t ctx)
#endif
IGC_WRITE_REG(hw, IGC_RDLEN(i),
scctx->isc_nrxd[0] * sizeof(struct igc_rx_desc));
IGC_WRITE_REG(hw, IGC_RDBAH(i),
(uint32_t)(bus_addr >> 32));
IGC_WRITE_REG(hw, IGC_RDBAL(i),
(uint32_t)bus_addr);
IGC_WRITE_REG(hw, IGC_RDBAH(i), (uint32_t)(bus_addr >> 32));
IGC_WRITE_REG(hw, IGC_RDBAL(i), (uint32_t)bus_addr);
IGC_WRITE_REG(hw, IGC_SRRCTL(i), srrctl);
/* Setup the Head and Tail Descriptor Pointers */
IGC_WRITE_REG(hw, IGC_RDH(i), 0);
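As an aside, the RDBAH/RDBAL pair above follows a common pattern: a descriptor ring's 64-bit bus address is handed to the NIC as two 32-bit halves. A sketch with an assumed register-write callback (wr32, bah, bal are illustrative names):

#include <stdint.h>

static inline void
example_write_ring_base(void (*wr32)(int reg, uint32_t val),
    int bah, int bal, uint64_t bus_addr)
{
	wr32(bah, (uint32_t)(bus_addr >> 32));	/* base address, high half */
	wr32(bal, (uint32_t)bus_addr);		/* base address, low half */
}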
@ -2648,17 +2692,17 @@ igc_add_hw_stats(struct igc_softc *sc)
/* Driver Statistics */
SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "dropped",
CTLFLAG_RD, &sc->dropped_pkts,
"Driver dropped packets");
CTLFLAG_RD, &sc->dropped_pkts,
"Driver dropped packets");
SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "link_irq",
CTLFLAG_RD, &sc->link_irq,
"Link MSI-X IRQ Handled");
CTLFLAG_RD, &sc->link_irq,
"Link MSI-X IRQ Handled");
SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "rx_overruns",
CTLFLAG_RD, &sc->rx_overruns,
"RX overruns");
CTLFLAG_RD, &sc->rx_overruns,
"RX overruns");
SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "watchdog_timeouts",
CTLFLAG_RD, &sc->watchdog_events,
"Watchdog timeouts");
CTLFLAG_RD, &sc->watchdog_events,
"Watchdog timeouts");
SYSCTL_ADD_PROC(ctx, child, OID_AUTO, "device_control",
CTLTYPE_UINT | CTLFLAG_RD | CTLFLAG_NEEDGIANT,
sc, IGC_CTRL, igc_sysctl_reg_handler, "IU",
@ -2668,11 +2712,11 @@ igc_add_hw_stats(struct igc_softc *sc)
sc, IGC_RCTL, igc_sysctl_reg_handler, "IU",
"Receiver Control Register");
SYSCTL_ADD_UINT(ctx, child, OID_AUTO, "fc_high_water",
CTLFLAG_RD, &sc->hw.fc.high_water, 0,
"Flow Control High Watermark");
CTLFLAG_RD, &sc->hw.fc.high_water, 0,
"Flow Control High Watermark");
SYSCTL_ADD_UINT(ctx, child, OID_AUTO, "fc_low_water",
CTLFLAG_RD, &sc->hw.fc.low_water, 0,
"Flow Control Low Watermark");
CTLFLAG_RD, &sc->hw.fc.low_water, 0,
"Flow Control Low Watermark");
for (int i = 0; i < sc->tx_num_queues; i++, tx_que++) {
struct tx_ring *txr = &tx_que->txr;
@ -2694,8 +2738,8 @@ igc_add_hw_stats(struct igc_softc *sc)
IGC_TDT(txr->me), igc_sysctl_reg_handler, "IU",
"Transmit Descriptor Tail");
SYSCTL_ADD_ULONG(ctx, queue_list, OID_AUTO, "tx_irq",
CTLFLAG_RD, &txr->tx_irq,
"Queue MSI-X Transmit Interrupts");
CTLFLAG_RD, &txr->tx_irq,
"Queue MSI-X Transmit Interrupts");
}
for (int j = 0; j < sc->rx_num_queues; j++, rx_que++) {
@ -2707,8 +2751,8 @@ igc_add_hw_stats(struct igc_softc *sc)
SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "interrupt_rate",
CTLTYPE_UINT | CTLFLAG_RD, rx_que,
false, igc_sysctl_interrupt_rate_handler, "IU",
"Interrupt Rate");
false, igc_sysctl_interrupt_rate_handler, "IU",
"Interrupt Rate");
SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "rxd_head",
CTLTYPE_UINT | CTLFLAG_RD | CTLFLAG_NEEDGIANT, sc,
IGC_RDH(rxr->me), igc_sysctl_reg_handler, "IU",
@ -2718,181 +2762,179 @@ igc_add_hw_stats(struct igc_softc *sc)
IGC_RDT(rxr->me), igc_sysctl_reg_handler, "IU",
"Receive Descriptor Tail");
SYSCTL_ADD_ULONG(ctx, queue_list, OID_AUTO, "rx_irq",
CTLFLAG_RD, &rxr->rx_irq,
"Queue MSI-X Receive Interrupts");
CTLFLAG_RD, &rxr->rx_irq,
"Queue MSI-X Receive Interrupts");
}
/* MAC stats get their own sub node */
stat_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, "mac_stats",
CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "Statistics");
stat_list = SYSCTL_CHILDREN(stat_node);
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "excess_coll",
CTLFLAG_RD, &stats->ecol,
"Excessive collisions");
CTLFLAG_RD, &stats->ecol,
"Excessive collisions");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "single_coll",
CTLFLAG_RD, &stats->scc,
"Single collisions");
CTLFLAG_RD, &stats->scc,
"Single collisions");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "multiple_coll",
CTLFLAG_RD, &stats->mcc,
"Multiple collisions");
CTLFLAG_RD, &stats->mcc,
"Multiple collisions");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "late_coll",
CTLFLAG_RD, &stats->latecol,
"Late collisions");
CTLFLAG_RD, &stats->latecol,
"Late collisions");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "collision_count",
CTLFLAG_RD, &stats->colc,
"Collision Count");
CTLFLAG_RD, &stats->colc,
"Collision Count");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "symbol_errors",
CTLFLAG_RD, &sc->stats.symerrs,
"Symbol Errors");
CTLFLAG_RD, &sc->stats.symerrs,
"Symbol Errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "sequence_errors",
CTLFLAG_RD, &sc->stats.sec,
"Sequence Errors");
CTLFLAG_RD, &sc->stats.sec,
"Sequence Errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "defer_count",
CTLFLAG_RD, &sc->stats.dc,
"Defer Count");
CTLFLAG_RD, &sc->stats.dc,
"Defer Count");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "missed_packets",
CTLFLAG_RD, &sc->stats.mpc,
"Missed Packets");
CTLFLAG_RD, &sc->stats.mpc,
"Missed Packets");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_length_errors",
CTLFLAG_RD, &sc->stats.rlec,
"Receive Length Errors");
CTLFLAG_RD, &sc->stats.rlec,
"Receive Length Errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_no_buff",
CTLFLAG_RD, &sc->stats.rnbc,
"Receive No Buffers");
CTLFLAG_RD, &sc->stats.rnbc,
"Receive No Buffers");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_undersize",
CTLFLAG_RD, &sc->stats.ruc,
"Receive Undersize");
CTLFLAG_RD, &sc->stats.ruc,
"Receive Undersize");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_fragmented",
CTLFLAG_RD, &sc->stats.rfc,
"Fragmented Packets Received ");
CTLFLAG_RD, &sc->stats.rfc,
"Fragmented Packets Received ");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_oversize",
CTLFLAG_RD, &sc->stats.roc,
"Oversized Packets Received");
CTLFLAG_RD, &sc->stats.roc,
"Oversized Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_jabber",
CTLFLAG_RD, &sc->stats.rjc,
"Recevied Jabber");
CTLFLAG_RD, &sc->stats.rjc,
"Recevied Jabber");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_errs",
CTLFLAG_RD, &sc->stats.rxerrc,
"Receive Errors");
CTLFLAG_RD, &sc->stats.rxerrc,
"Receive Errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "crc_errs",
CTLFLAG_RD, &sc->stats.crcerrs,
"CRC errors");
CTLFLAG_RD, &sc->stats.crcerrs,
"CRC errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "alignment_errs",
CTLFLAG_RD, &sc->stats.algnerrc,
"Alignment Errors");
CTLFLAG_RD, &sc->stats.algnerrc,
"Alignment Errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "xon_recvd",
CTLFLAG_RD, &sc->stats.xonrxc,
"XON Received");
CTLFLAG_RD, &sc->stats.xonrxc,
"XON Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "xon_txd",
CTLFLAG_RD, &sc->stats.xontxc,
"XON Transmitted");
CTLFLAG_RD, &sc->stats.xontxc,
"XON Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "xoff_recvd",
CTLFLAG_RD, &sc->stats.xoffrxc,
"XOFF Received");
CTLFLAG_RD, &sc->stats.xoffrxc,
"XOFF Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "xoff_txd",
CTLFLAG_RD, &sc->stats.xofftxc,
"XOFF Transmitted");
CTLFLAG_RD, &sc->stats.xofftxc,
"XOFF Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "unsupported_fc_recvd",
CTLFLAG_RD, &sc->stats.fcruc,
"Unsupported Flow Control Received");
CTLFLAG_RD, &sc->stats.fcruc,
"Unsupported Flow Control Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "mgmt_pkts_recvd",
CTLFLAG_RD, &sc->stats.mgprc,
"Management Packets Received");
CTLFLAG_RD, &sc->stats.mgprc,
"Management Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "mgmt_pkts_drop",
CTLFLAG_RD, &sc->stats.mgpdc,
"Management Packets Dropped");
CTLFLAG_RD, &sc->stats.mgpdc,
"Management Packets Dropped");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "mgmt_pkts_txd",
CTLFLAG_RD, &sc->stats.mgptc,
"Management Packets Transmitted");
CTLFLAG_RD, &sc->stats.mgptc,
"Management Packets Transmitted");
/* Packet Reception Stats */
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "total_pkts_recvd",
CTLFLAG_RD, &sc->stats.tpr,
"Total Packets Received ");
CTLFLAG_RD, &sc->stats.tpr,
"Total Packets Received ");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "good_pkts_recvd",
CTLFLAG_RD, &sc->stats.gprc,
"Good Packets Received");
CTLFLAG_RD, &sc->stats.gprc,
"Good Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "bcast_pkts_recvd",
CTLFLAG_RD, &sc->stats.bprc,
"Broadcast Packets Received");
CTLFLAG_RD, &sc->stats.bprc,
"Broadcast Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "mcast_pkts_recvd",
CTLFLAG_RD, &sc->stats.mprc,
"Multicast Packets Received");
CTLFLAG_RD, &sc->stats.mprc,
"Multicast Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_64",
CTLFLAG_RD, &sc->stats.prc64,
"64 byte frames received ");
CTLFLAG_RD, &sc->stats.prc64,
"64 byte frames received ");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_65_127",
CTLFLAG_RD, &sc->stats.prc127,
"65-127 byte frames received");
CTLFLAG_RD, &sc->stats.prc127,
"65-127 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_128_255",
CTLFLAG_RD, &sc->stats.prc255,
"128-255 byte frames received");
CTLFLAG_RD, &sc->stats.prc255,
"128-255 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_256_511",
CTLFLAG_RD, &sc->stats.prc511,
"256-511 byte frames received");
CTLFLAG_RD, &sc->stats.prc511,
"256-511 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_512_1023",
CTLFLAG_RD, &sc->stats.prc1023,
"512-1023 byte frames received");
CTLFLAG_RD, &sc->stats.prc1023,
"512-1023 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_1024_1522",
CTLFLAG_RD, &sc->stats.prc1522,
"1023-1522 byte frames received");
CTLFLAG_RD, &sc->stats.prc1522,
"1023-1522 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "good_octets_recvd",
CTLFLAG_RD, &sc->stats.gorc,
"Good Octets Received");
CTLFLAG_RD, &sc->stats.gorc,
"Good Octets Received");
/* Packet Transmission Stats */
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "good_octets_txd",
CTLFLAG_RD, &sc->stats.gotc,
"Good Octets Transmitted");
CTLFLAG_RD, &sc->stats.gotc,
"Good Octets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "total_pkts_txd",
CTLFLAG_RD, &sc->stats.tpt,
"Total Packets Transmitted");
CTLFLAG_RD, &sc->stats.tpt,
"Total Packets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "good_pkts_txd",
CTLFLAG_RD, &sc->stats.gptc,
"Good Packets Transmitted");
CTLFLAG_RD, &sc->stats.gptc,
"Good Packets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "bcast_pkts_txd",
CTLFLAG_RD, &sc->stats.bptc,
"Broadcast Packets Transmitted");
CTLFLAG_RD, &sc->stats.bptc,
"Broadcast Packets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "mcast_pkts_txd",
CTLFLAG_RD, &sc->stats.mptc,
"Multicast Packets Transmitted");
CTLFLAG_RD, &sc->stats.mptc,
"Multicast Packets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_64",
CTLFLAG_RD, &sc->stats.ptc64,
"64 byte frames transmitted ");
CTLFLAG_RD, &sc->stats.ptc64,
"64 byte frames transmitted ");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_65_127",
CTLFLAG_RD, &sc->stats.ptc127,
"65-127 byte frames transmitted");
CTLFLAG_RD, &sc->stats.ptc127,
"65-127 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_128_255",
CTLFLAG_RD, &sc->stats.ptc255,
"128-255 byte frames transmitted");
CTLFLAG_RD, &sc->stats.ptc255,
"128-255 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_256_511",
CTLFLAG_RD, &sc->stats.ptc511,
"256-511 byte frames transmitted");
CTLFLAG_RD, &sc->stats.ptc511,
"256-511 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_512_1023",
CTLFLAG_RD, &sc->stats.ptc1023,
"512-1023 byte frames transmitted");
CTLFLAG_RD, &sc->stats.ptc1023,
"512-1023 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_1024_1522",
CTLFLAG_RD, &sc->stats.ptc1522,
"1024-1522 byte frames transmitted");
CTLFLAG_RD, &sc->stats.ptc1522,
"1024-1522 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tso_txd",
CTLFLAG_RD, &sc->stats.tsctc,
"TSO Contexts Transmitted");
CTLFLAG_RD, &sc->stats.tsctc,
"TSO Contexts Transmitted");
/* Interrupt Stats */
int_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, "interrupts",
CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "Interrupt Statistics");
int_list = SYSCTL_CHILDREN(int_node);
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "asserts",
CTLFLAG_RD, &sc->stats.iac,
"Interrupt Assertion Count");
CTLFLAG_RD, &sc->stats.iac,
"Interrupt Assertion Count");
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "rx_desc_min_thresh",
CTLFLAG_RD, &sc->stats.rxdmtc,
"Rx Desc Min Thresh Count");
CTLFLAG_RD, &sc->stats.rxdmtc,
"Rx Desc Min Thresh Count");
}
static void
@ -2913,21 +2955,22 @@ igc_sbuf_fw_version(struct igc_fw_version *fw_ver, struct sbuf *buf)
if (fw_ver->eep_major || fw_ver->eep_minor || fw_ver->eep_build) {
sbuf_printf(buf, "EEPROM V%d.%d-%d", fw_ver->eep_major,
fw_ver->eep_minor, fw_ver->eep_build);
space = " ";
}
if (fw_ver->invm_major || fw_ver->invm_minor || fw_ver->invm_img_type) {
if (fw_ver->invm_major || fw_ver->invm_minor ||
fw_ver->invm_img_type) {
sbuf_printf(buf, "%sNVM V%d.%d imgtype%d",
space, fw_ver->invm_major, fw_ver->invm_minor,
fw_ver->invm_img_type);
space = " ";
}
if (fw_ver->or_valid) {
sbuf_printf(buf, "%sOption ROM V%d-b%d-p%d",
space, fw_ver->or_major, fw_ver->or_build,
fw_ver->or_patch);
space = " ";
}
@ -3085,7 +3128,7 @@ igc_set_flowcntl(SYSCTL_HANDLER_ARGS)
{
int error;
static int input = 3; /* default is full */
struct igc_softc *sc = (struct igc_softc *) arg1;
error = sysctl_handle_int(oidp, &input, 0, req);
@ -3253,14 +3296,14 @@ igc_print_debug_info(struct igc_softc *sc)
for (int i = 0; i < sc->tx_num_queues; i++, txr++) {
device_printf(dev, "TX Queue %d ------\n", i);
device_printf(dev, "hw tdh = %d, hw tdt = %d\n",
IGC_READ_REG(&sc->hw, IGC_TDH(i)),
IGC_READ_REG(&sc->hw, IGC_TDT(i)));
}
for (int j=0; j < sc->rx_num_queues; j++, rxr++) {
device_printf(dev, "RX Queue %d ------\n", j);
device_printf(dev, "hw rdh = %d, hw rdt = %d\n",
IGC_READ_REG(&sc->hw, IGC_RDH(j)),
IGC_READ_REG(&sc->hw, IGC_RDT(j)));
}
}


@ -97,15 +97,19 @@ igc_dump_rs(struct igc_softc *sc)
cur = txr->tx_rsq[rs_cidx];
status = txr->tx_base[cur].upper.fields.status;
if (!(status & IGC_TXD_STAT_DD))
printf("qid[%d]->tx_rsq[%d]: %d clear ", qid, rs_cidx, cur);
printf("qid[%d]->tx_rsq[%d]: %d clear ",
qid, rs_cidx, cur);
} else {
rs_cidx = (rs_cidx-1)&(ntxd-1);
cur = txr->tx_rsq[rs_cidx];
printf("qid[%d]->tx_rsq[rs_cidx-1=%d]: %d ", qid, rs_cidx, cur);
printf("qid[%d]->tx_rsq[rs_cidx-1=%d]: %d ",
qid, rs_cidx, cur);
}
printf("cidx_prev=%d rs_pidx=%d ",txr->tx_cidx_processed, txr->tx_rs_pidx);
printf("cidx_prev=%d rs_pidx=%d ",txr->tx_cidx_processed,
txr->tx_rs_pidx);
for (i = 0; i < ntxd; i++) {
if (txr->tx_base[i].upper.fields.status & IGC_TXD_STAT_DD)
if (txr->tx_base[i].upper.fields.status &
IGC_TXD_STAT_DD)
printf("%d set ", i);
}
printf("\n");
@ -138,14 +142,15 @@ igc_tso_setup(struct tx_ring *txr, if_pkt_info_t pi, uint32_t *cmd_type_len,
break;
default:
panic("%s: CSUM_TSO but no supported IP version (0x%04x)",
__func__, ntohs(pi->ipi_etype));
break;
}
TXD = (struct igc_adv_tx_context_desc *) &txr->tx_base[pi->ipi_pidx];
/* This is used in the transmit desc in encap */
paylen = pi->ipi_len - pi->ipi_ehdrlen - pi->ipi_ip_hlen - pi->ipi_tcp_hlen;
paylen = pi->ipi_len - pi->ipi_ehdrlen - pi->ipi_ip_hlen -
pi->ipi_tcp_hlen;
/* VLAN MACLEN IPLEN */
if (pi->ipi_mflags & M_VLANTAG) {
@ -180,8 +185,8 @@ igc_tso_setup(struct tx_ring *txr, if_pkt_info_t pi, uint32_t *cmd_type_len,
*
**********************************************************************/
static int
igc_tx_ctx_setup(struct tx_ring *txr, if_pkt_info_t pi, uint32_t *cmd_type_len,
uint32_t *olinfo_status)
igc_tx_ctx_setup(struct tx_ring *txr, if_pkt_info_t pi,
uint32_t *cmd_type_len, uint32_t *olinfo_status)
{
struct igc_adv_tx_context_desc *TXD;
uint32_t vlan_macip_lens, type_tucmd_mlhl;
@ -275,7 +280,7 @@ igc_isc_txd_encap(void *arg, if_pkt_info_t pi)
pidx_last = olinfo_status = 0;
/* Basic descriptor defines */
cmd_type_len = (IGC_ADVTXD_DTYP_DATA |
IGC_ADVTXD_DCMD_IFCS | IGC_ADVTXD_DCMD_DEXT);
if (pi->ipi_mflags & M_VLANTAG)
cmd_type_len |= IGC_ADVTXD_DCMD_VLE;
@ -324,9 +329,9 @@ igc_isc_txd_encap(void *arg, if_pkt_info_t pi)
static void
igc_isc_txd_flush(void *arg, uint16_t txqid, qidx_t pidx)
{
struct igc_softc *sc = arg;
struct igc_tx_queue *que = &sc->tx_queues[txqid];
struct tx_ring *txr = &que->txr;
IGC_WRITE_REG(&sc->hw, IGC_TDT(txr->me), pidx);
}
@ -370,12 +375,13 @@ igc_isc_txd_credits_update(void *arg, uint16_t txqid, bool clear)
MPASS(delta > 0);
processed += delta;
prev = cur;
rs_cidx = (rs_cidx + 1) & (ntxd-1);
if (rs_cidx == txr->tx_rs_pidx)
break;
cur = txr->tx_rsq[rs_cidx];
status = ((union igc_adv_tx_desc *)&txr->tx_base[cur])->wb.status;
status =
((union igc_adv_tx_desc *)&txr->tx_base[cur])->wb.status;
} while ((status & IGC_TXD_STAT_DD));
txr->tx_rs_cidx = rs_cidx;
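Sketch of the completion scan this loop performs, under assumed names and an assumed power-of-two ring size (the driver also tracks deltas between indices, which is omitted here): walk the report-status queue and count descriptors whose done (DD) bit the hardware has set.

#include <stdbool.h>
#include <stdint.h>

#define EX_NTXD 1024	/* assumed power-of-two descriptor count */

static int
example_tx_credits(const bool *desc_done, const uint16_t *rsq,
    uint16_t rs_cidx, uint16_t rs_pidx)
{
	int credits = 0;

	/* Stop at the first descriptor the NIC has not finished. */
	while (rs_cidx != rs_pidx && desc_done[rsq[rs_cidx]]) {
		credits++;
		rs_cidx = (rs_cidx + 1) & (EX_NTXD - 1);
	}
	return (credits);
}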
@ -411,7 +417,8 @@ igc_isc_rxd_refill(void *arg, if_rxd_update_t iru)
}
static void
igc_isc_rxd_flush(void *arg, uint16_t rxqid, uint8_t flid __unused, qidx_t pidx)
igc_isc_rxd_flush(void *arg, uint16_t rxqid, uint8_t flid __unused,
qidx_t pidx)
{
struct igc_softc *sc = arg;
struct igc_rx_queue *que = &sc->rx_queues[rxqid];
@ -477,7 +484,8 @@ igc_isc_rxd_pkt_get(void *arg, if_rxd_info_t ri)
MPASS ((staterr & IGC_RXD_STAT_DD) != 0);
len = le16toh(rxd->wb.upper.length);
ptype = le32toh(rxd->wb.lower.lo_dword.data) & IGC_PKTTYPE_MASK;
ptype =
le32toh(rxd->wb.lower.lo_dword.data) & IGC_PKTTYPE_MASK;
ri->iri_len += len;
rxr->rx_bytes += ri->iri_len;
@ -558,7 +566,8 @@ igc_rx_checksum(uint32_t staterr, if_rxd_info_t ri, uint32_t ptype)
(ptype & IGC_RXDADV_PKTTYPE_SCTP) != 0)) {
ri->iri_csum_flags |= CSUM_SCTP_VALID;
} else {
ri->iri_csum_flags |= CSUM_DATA_VALID | CSUM_PSEUDO_HDR;
ri->iri_csum_flags |=
CSUM_DATA_VALID | CSUM_PSEUDO_HDR;
ri->iri_csum_data = htons(0xffff);
}
}
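For context, the convention the else-branch above relies on: when the NIC has verified an L4 checksum, a FreeBSD driver reports CSUM_DATA_VALID | CSUM_PSEUDO_HDR with a csum_data of 0xffff so the stack skips software verification. A sketch on the classic mbuf path (iflib copies iri_csum_flags into the mbuf equivalently):

#include <sys/param.h>
#include <sys/mbuf.h>

static void
example_mark_l4_csum_ok(struct mbuf *m)
{
	m->m_pkthdr.csum_flags |= CSUM_DATA_VALID | CSUM_PSEUDO_HDR;
	m->m_pkthdr.csum_data = 0xffff;	/* "checksum good" sentinel */
}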


@ -1,4 +1,4 @@
/******************************************************************************
/*****************************************************************************
Copyright (c) 2001-2017, Intel Corporation
All rights reserved.
@ -29,7 +29,7 @@
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
******************************************************************************/
*****************************************************************************/
#include "ixgbe.h"
@ -114,11 +114,11 @@ ixgbe_get_bypass_time(u32 *year, u32 *sec)
static int
ixgbe_bp_version(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
int error = 0;
static int version = 0;
u32 cmd;
ixgbe_bypass_mutex_enter(sc);
cmd = BYPASS_PAGE_CTL2 | BYPASS_WE;
@ -154,15 +154,14 @@ err:
static int
ixgbe_bp_set_state(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
int error = 0;
static int state = 0;
/* Get the current state */
ixgbe_bypass_mutex_enter(sc);
error = hw->mac.ops.bypass_rw(hw,
BYPASS_PAGE_CTL0, &state);
error = hw->mac.ops.bypass_rw(hw, BYPASS_PAGE_CTL0, &state);
ixgbe_bypass_mutex_clear(sc);
if (error != 0)
return (error);
@ -216,10 +215,10 @@ out:
static int
ixgbe_bp_timeout(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
int error = 0;
static int timeout = 0;
/* Get the current value */
ixgbe_bypass_mutex_enter(sc);
@ -259,10 +258,10 @@ ixgbe_bp_timeout(SYSCTL_HANDLER_ARGS)
static int
ixgbe_bp_main_on(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
int error = 0;
static int main_on = 0;
ixgbe_bypass_mutex_enter(sc);
error = hw->mac.ops.bypass_rw(hw, BYPASS_PAGE_CTL0, &main_on);
@ -301,10 +300,10 @@ ixgbe_bp_main_on(SYSCTL_HANDLER_ARGS)
static int
ixgbe_bp_main_off(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
int error = 0;
static int main_off = 0;
ixgbe_bypass_mutex_enter(sc);
error = hw->mac.ops.bypass_rw(hw, BYPASS_PAGE_CTL0, &main_off);
@ -343,10 +342,10 @@ ixgbe_bp_main_off(SYSCTL_HANDLER_ARGS)
static int
ixgbe_bp_aux_on(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
int error = 0;
static int aux_on = 0;
ixgbe_bypass_mutex_enter(sc);
error = hw->mac.ops.bypass_rw(hw, BYPASS_PAGE_CTL0, &aux_on);
@ -385,10 +384,10 @@ ixgbe_bp_aux_on(SYSCTL_HANDLER_ARGS)
static int
ixgbe_bp_aux_off(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
int error = 0;
static int aux_off = 0;
ixgbe_bypass_mutex_enter(sc);
error = hw->mac.ops.bypass_rw(hw, BYPASS_PAGE_CTL0, &aux_off);
@ -432,11 +431,11 @@ ixgbe_bp_aux_off(SYSCTL_HANDLER_ARGS)
static int
ixgbe_bp_wd_set(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
int error, tmp;
static int timeout = 0;
u32 mask, arg;
/* Get the current hardware value */
ixgbe_bypass_mutex_enter(sc);
@ -503,11 +502,11 @@ ixgbe_bp_wd_set(SYSCTL_HANDLER_ARGS)
static int
ixgbe_bp_wd_reset(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
u32 sec, year;
int cmd, count = 0, error = 0;
int reset_wd = 0;
error = sysctl_handle_int(oidp, &reset_wd, 0, req);
if ((error) || (req->newptr == NULL))
@ -549,14 +548,14 @@ ixgbe_bp_wd_reset(SYSCTL_HANDLER_ARGS)
static int
ixgbe_bp_log(SYSCTL_HANDLER_ARGS)
{
struct ixgbe_softc *sc = (struct ixgbe_softc *) arg1;
struct ixgbe_hw *hw = &sc->hw;
u32 cmd, base, head;
u32 log_off, count = 0;
static int status = 0;
u8 data;
struct ixgbe_bypass_eeprom eeprom[BYPASS_MAX_LOGS];
int i, error = 0;
error = sysctl_handle_int(oidp, &status, 0, req);
if ((error) || (req->newptr == NULL))
@ -639,12 +638,15 @@ ixgbe_bp_log(SYSCTL_HANDLER_ARGS)
BYPASS_LOG_EVENT_SHIFT;
u8 action = eeprom[count].actions & BYPASS_LOG_ACTION_M;
u16 day_mon[2][13] = {
{0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365},
{0, 31, 59, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366}
{0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304,
334, 365},
{0, 31, 59, 91, 121, 152, 182, 213, 244, 274, 305,
335, 366}
};
char *event_str[] = {"unknown", "main on", "aux on",
"main off", "aux off", "WDT", "user" };
char *action_str[] = {"ignore", "normal", "bypass", "isolate",};
char *action_str[] =
{"ignore", "normal", "bypass", "isolate",};
/* verify valid data 1 - 6 */
if (event < BYPASS_EVENT_MAIN_ON || event > BYPASS_EVENT_USR)
@ -711,11 +713,11 @@ unlock_err:
void
ixgbe_bypass_init(struct ixgbe_softc *sc)
{
struct ixgbe_hw *hw = &sc->hw;
device_t dev = sc->dev;
struct sysctl_oid *bp_node;
struct sysctl_oid_list *bp_list;
u32 mask, value, sec, year;
if (!(sc->feat_cap & IXGBE_FEATURE_BYPASS))
return;
@ -723,13 +725,13 @@ ixgbe_bypass_init(struct ixgbe_softc *sc)
/* First set up time for the hardware */
ixgbe_get_bypass_time(&year, &sec);
mask = BYPASS_CTL1_TIME_M
| BYPASS_CTL1_VALID_M
| BYPASS_CTL1_OFFTRST_M;
mask = BYPASS_CTL1_TIME_M |
BYPASS_CTL1_VALID_M |
BYPASS_CTL1_OFFTRST_M;
value = (sec & BYPASS_CTL1_TIME_M)
| BYPASS_CTL1_VALID
| BYPASS_CTL1_OFFTRST;
value = (sec & BYPASS_CTL1_TIME_M) |
BYPASS_CTL1_VALID |
BYPASS_CTL1_OFFTRST;
ixgbe_bypass_mutex_enter(sc);
hw->mac.ops.bypass_set(hw, BYPASS_PAGE_CTL1, mask, value);


@ -1,4 +1,4 @@
/******************************************************************************
/*****************************************************************************
Copyright (c) 2001-2017, Intel Corporation
All rights reserved.
@ -29,7 +29,7 @@
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
******************************************************************************/
*****************************************************************************/
#include "ixgbe.h"
@ -51,9 +51,9 @@ ixgbe_init_fdir(struct ixgbe_softc *sc)
void
ixgbe_reinit_fdir(void *context)
{
if_ctx_t ctx = context;
struct ixgbe_softc *sc = iflib_get_softc(ctx);
if_t ifp = iflib_get_ifp(ctx);
if (!(sc->feat_en & IXGBE_FEATURE_FDIR))
return;
@ -79,16 +79,16 @@ ixgbe_reinit_fdir(void *context)
void
ixgbe_atr(struct tx_ring *txr, struct mbuf *mp)
{
struct ixgbe_softc *sc = txr->sc;
struct ix_queue *que;
struct ip *ip;
struct tcphdr *th;
struct udphdr *uh;
struct ether_vlan_header *eh;
union ixgbe_atr_hash_dword input = {.dword = 0};
union ixgbe_atr_hash_dword common = {.dword = 0};
int ehdrlen, ip_hlen;
u16 etype;
eh = mtod(mp, struct ether_vlan_header *);
if (eh->evl_encap_proto == htons(ETHERTYPE_VLAN)) {

File diff suppressed because it is too large


@ -1,4 +1,4 @@
/******************************************************************************
/*****************************************************************************
Copyright (c) 2001-2017, Intel Corporation
All rights reserved.
@ -29,7 +29,7 @@
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
******************************************************************************/
*****************************************************************************/
#include "opt_inet.h"
@ -58,13 +58,18 @@ static const char ixv_driver_version[] = "2.0.1-k";
************************************************************************/
static const pci_vendor_info_t ixv_vendor_info_array[] =
{
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_VF, "Intel(R) X520 82599 Virtual Function"),
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540_VF, "Intel(R) X540 Virtual Function"),
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550_VF, "Intel(R) X550 Virtual Function"),
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_VF, "Intel(R) X552 Virtual Function"),
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_VF, "Intel(R) X553 Virtual Function"),
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_VF,
"Intel(R) X520 82599 Virtual Function"),
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540_VF,
"Intel(R) X540 Virtual Function"),
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550_VF,
"Intel(R) X550 Virtual Function"),
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_VF,
"Intel(R) X552 Virtual Function"),
PVID(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_VF,
"Intel(R) X553 Virtual Function"),
/* required last entry */
PVID_END
};
/************************************************************************
@ -76,8 +81,10 @@ static int ixv_if_attach_post(if_ctx_t);
static int ixv_if_detach(if_ctx_t);
static int ixv_if_rx_queue_intr_enable(if_ctx_t, uint16_t);
static int ixv_if_tx_queues_alloc(if_ctx_t, caddr_t *, uint64_t *, int, int);
static int ixv_if_rx_queues_alloc(if_ctx_t, caddr_t *, uint64_t *, int, int);
static int ixv_if_tx_queues_alloc(if_ctx_t, caddr_t *, uint64_t *, int,
int);
static int ixv_if_rx_queues_alloc(if_ctx_t, caddr_t *, uint64_t *, int,
int);
static void ixv_if_queues_free(if_ctx_t);
static void ixv_identify_hardware(if_ctx_t);
static void ixv_init_device_features(struct ixgbe_softc *);
@ -239,17 +246,17 @@ ixv_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs,
int ntxqs, int ntxqsets)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
if_softc_ctx_t scctx = sc->shared;
struct ix_tx_queue *que;
int i, j, error;
MPASS(sc->num_tx_queues == ntxqsets);
MPASS(ntxqs == 1);
/* Allocate queue structure memory */
sc->tx_queues =
(struct ix_tx_queue *)malloc(sizeof(struct ix_tx_queue) * ntxqsets,
M_DEVBUF, M_NOWAIT | M_ZERO);
(struct ix_tx_queue *)malloc(sizeof(struct ix_tx_queue) *
ntxqsets, M_DEVBUF, M_NOWAIT | M_ZERO);
if (!sc->tx_queues) {
device_printf(iflib_get_dev(ctx),
"Unable to allocate TX ring memory\n");
@ -263,13 +270,14 @@ ixv_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs,
txr->sc = que->sc = sc;
/* Allocate report status array */
if (!(txr->tx_rsq = (qidx_t *)malloc(sizeof(qidx_t) * scctx->isc_ntxd[0], M_DEVBUF, M_NOWAIT | M_ZERO))) {
if (!(txr->tx_rsq = (qidx_t *)malloc(sizeof(qidx_t) *
scctx->isc_ntxd[0], M_DEVBUF, M_NOWAIT | M_ZERO))) {
error = ENOMEM;
goto fail;
}
for (j = 0; j < scctx->isc_ntxd[0]; j++)
txr->tx_rsq[j] = QIDX_INVALID;
/* get the virtual and physical address of the hardware queues */
/* get virtual and physical address of the hardware queues */
txr->tail = IXGBE_VFTDT(txr->me);
txr->tx_base = (union ixgbe_adv_tx_desc *)vaddrs[i*ntxqs];
txr->tx_paddr = paddrs[i*ntxqs];
@ -299,15 +307,15 @@ ixv_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs,
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
struct ix_rx_queue *que;
int i, error;
MPASS(sc->num_rx_queues == nrxqsets);
MPASS(nrxqs == 1);
/* Allocate queue structure memory */
sc->rx_queues =
(struct ix_rx_queue *)malloc(sizeof(struct ix_rx_queue) * nrxqsets,
M_DEVBUF, M_NOWAIT | M_ZERO);
(struct ix_rx_queue *)malloc(sizeof(struct ix_rx_queue) *
nrxqsets, M_DEVBUF, M_NOWAIT | M_ZERO);
if (!sc->rx_queues) {
device_printf(iflib_get_dev(ctx),
"Unable to allocate TX ring memory\n");
@ -348,7 +356,7 @@ ixv_if_queues_free(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
struct ix_tx_queue *que = sc->tx_queues;
int i;
if (que == NULL)
goto free;
@ -382,11 +390,11 @@ free:
static int
ixv_if_attach_pre(if_ctx_t ctx)
{
struct ixgbe_softc *sc;
device_t dev;
if_softc_ctx_t scctx;
struct ixgbe_hw *hw;
int error = 0;
INIT_DEBUGOUT("ixv_attach: begin");
@ -458,7 +466,7 @@ ixv_if_attach_pre(if_ctx_t ctx)
/* Check if VF was disabled by PF */
error = hw->mac.ops.get_link_state(hw, &sc->link_enabled);
if (error) {
/* PF is not capable of controlling VF state. Enable the link. */
/* PF is not capable of controlling VF state. Enable link. */
sc->link_enabled = true;
}
@ -522,8 +530,8 @@ static int
ixv_if_attach_post(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
device_t dev = iflib_get_dev(ctx);
int error = 0;
/* Setup OS specific network interface */
error = ixv_setup_interface(ctx);
@ -568,7 +576,7 @@ ixv_if_mtu_set(if_ctx_t ctx, uint32_t mtu)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
if_t ifp = iflib_get_ifp(ctx);
int error = 0;
IOCTL_DEBUGOUT("ioctl: SIOCSIFMTU (Set Interface MTU)");
if (mtu > IXGBE_MAX_FRAME_SIZE - IXGBE_MTU_HDR) {
@ -596,9 +604,9 @@ ixv_if_init(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
if_t ifp = iflib_get_ifp(ctx);
device_t dev = iflib_get_dev(ctx);
struct ixgbe_hw *hw = &sc->hw;
int error = 0;
INIT_DEBUGOUT("ixv_if_init: begin");
hw->adapter_stopped = false;
@ -670,8 +678,8 @@ static inline void
ixv_enable_queue(struct ixgbe_softc *sc, u32 vector)
{
struct ixgbe_hw *hw = &sc->hw;
u32 queue = 1 << vector;
u32 mask;
mask = (IXGBE_EIMS_RTX_QUEUE & queue);
IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, mask);
@ -684,8 +692,8 @@ static inline void
ixv_disable_queue(struct ixgbe_softc *sc, u32 vector)
{
struct ixgbe_hw *hw = &sc->hw;
u64 queue = (u64)(1 << vector);
u32 mask;
mask = (IXGBE_EIMS_RTX_QUEUE & queue);
IXGBE_WRITE_REG(hw, IXGBE_VTEIMC, mask);
@ -699,7 +707,7 @@ static int
ixv_msix_que(void *arg)
{
struct ix_rx_queue *que = arg;
struct ixgbe_softc *sc = que->sc;
ixv_disable_queue(sc, que->msix);
++que->irqs;
@ -713,9 +721,9 @@ ixv_msix_que(void *arg)
static int
ixv_msix_mbx(void *arg)
{
struct ixgbe_softc *sc = arg;
struct ixgbe_hw *hw = &sc->hw;
u32 reg;
++sc->link_irq;
@ -811,11 +819,13 @@ static int
ixv_negotiate_api(struct ixgbe_softc *sc)
{
struct ixgbe_hw *hw = &sc->hw;
int mbx_api[] = { ixgbe_mbox_api_12,
ixgbe_mbox_api_11,
ixgbe_mbox_api_10,
ixgbe_mbox_api_unknown };
int i = 0;
int mbx_api[] = {
ixgbe_mbox_api_12,
ixgbe_mbox_api_11,
ixgbe_mbox_api_10,
ixgbe_mbox_api_unknown
};
int i = 0;
while (mbx_api[i] != ixgbe_mbox_api_unknown) {
if (ixgbevf_negotiate_api_version(hw, mbx_api[i]) == 0)
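The loop above offers mailbox API versions newest-first and settles on the first one the PF accepts. A sketch of that pattern with an assumed callback (try_version is illustrative, not the driver's API):

static int
example_negotiate_api(int (*try_version)(int ver), const int *versions,
    int unknown)
{
	for (int i = 0; versions[i] != unknown; i++)
		if (try_version(versions[i]) == 0)
			return (versions[i]);	/* first accepted version */
	return (unknown);			/* nothing negotiated */
}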
@ -830,7 +840,8 @@ ixv_negotiate_api(struct ixgbe_softc *sc)
static u_int
ixv_if_multi_set_cb(void *cb_arg, struct sockaddr_dl *addr, u_int cnt)
{
bcopy(LLADDR(addr), &((u8 *)cb_arg)[cnt * IXGBE_ETH_LENGTH_OF_ADDRESS],
bcopy(LLADDR(addr),
&((u8 *)cb_arg)[cnt * IXGBE_ETH_LENGTH_OF_ADDRESS],
IXGBE_ETH_LENGTH_OF_ADDRESS);
return (++cnt);
@ -844,11 +855,11 @@ ixv_if_multi_set_cb(void *cb_arg, struct sockaddr_dl *addr, u_int cnt)
static void
ixv_if_multi_set(if_ctx_t ctx)
{
u8 mta[MAX_NUM_MULTICAST_ADDRESSES * IXGBE_ETH_LENGTH_OF_ADDRESS];
struct ixgbe_softc *sc = iflib_get_softc(ctx);
u8 *update_ptr;
if_t ifp = iflib_get_ifp(ctx);
int mcnt = 0;
IOCTL_DEBUGOUT("ixv_if_multi_set: begin");
@ -908,8 +919,8 @@ static void
ixv_if_update_admin_status(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
device_t dev = iflib_get_dev(ctx);
s32 status;
sc->hw.mac.get_link_status = true;
@ -955,7 +966,7 @@ ixv_if_update_admin_status(if_ctx_t ctx)
static void
ixv_if_stop(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
struct ixgbe_hw *hw = &sc->hw;
INIT_DEBUGOUT("ixv_stop: begin\n");
@ -981,8 +992,8 @@ ixv_if_stop(if_ctx_t ctx)
static void
ixv_identify_hardware(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
device_t dev = iflib_get_dev(ctx);
struct ixgbe_hw *hw = &sc->hw;
/* Save off the information about this board */
@ -1023,22 +1034,24 @@ static int
ixv_if_msix_intr_assign(if_ctx_t ctx, int msix)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
device_t dev = iflib_get_dev(ctx);
struct ix_rx_queue *rx_que = sc->rx_queues;
struct ix_tx_queue *tx_que;
int error, rid, vector = 0;
char buf[16];
for (int i = 0; i < sc->num_rx_queues; i++, vector++, rx_que++) {
rid = vector + 1;
snprintf(buf, sizeof(buf), "rxq%d", i);
error = iflib_irq_alloc_generic(ctx, &rx_que->que_irq, rid,
IFLIB_INTR_RXTX, ixv_msix_que, rx_que, rx_que->rxr.me, buf);
IFLIB_INTR_RXTX, ixv_msix_que, rx_que, rx_que->rxr.me,
buf);
if (error) {
device_printf(iflib_get_dev(ctx),
"Failed to allocate que int %d err: %d", i, error);
"Failed to allocate que int %d err: %d",
i, error);
sc->num_rx_queues = i + 1;
goto fail;
}
@ -1074,7 +1087,8 @@ ixv_if_msix_intr_assign(if_ctx_t ctx, int msix)
if (sc->hw.mac.type == ixgbe_mac_82599_vf) {
int msix_ctrl;
if (pci_find_cap(dev, PCIY_MSIX, &rid)) {
device_printf(dev, "Finding MSIX capability failed\n");
device_printf(dev,
"Finding MSIX capability failed\n");
} else {
rid += PCIR_MSIX_CTRL;
msix_ctrl = pci_read_config(dev, rid, 2);
@ -1101,21 +1115,21 @@ static int
ixv_allocate_pci_resources(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
device_t dev = iflib_get_dev(ctx);
int rid;
rid = PCIR_BAR(0);
sc->pci_mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid,
RF_ACTIVE);
if (!(sc->pci_mem)) {
device_printf(dev, "Unable to allocate bus resource: memory\n");
device_printf(dev,
"Unable to allocate bus resource: memory\n");
return (ENXIO);
}
sc->osdep.mem_bus_space_tag = rman_get_bustag(sc->pci_mem);
sc->osdep.mem_bus_space_handle =
rman_get_bushandle(sc->pci_mem);
sc->osdep.mem_bus_space_handle = rman_get_bushandle(sc->pci_mem);
sc->hw.hw_addr = (u8 *)&sc->osdep.mem_bus_space_handle;
return (0);
@ -1129,7 +1143,7 @@ ixv_free_pci_resources(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
struct ix_rx_queue *que = sc->rx_queues;
device_t dev = iflib_get_dev(ctx);
/* Release all MSI-X queue resources */
if (sc->intr_type == IFLIB_INTR_MSIX)
@ -1156,7 +1170,7 @@ ixv_setup_interface(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
if_softc_ctx_t scctx = sc->shared;
if_t ifp = iflib_get_ifp(ctx);
INIT_DEBUGOUT("ixv_setup_interface: begin");
@ -1178,7 +1192,7 @@ static uint64_t
ixv_if_get_counter(if_ctx_t ctx, ift_counter cnt)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
if_t ifp = iflib_get_ifp(ctx);
switch (cnt) {
case IFCOUNTER_IPACKETS:
@ -1222,16 +1236,16 @@ static void
ixv_initialize_transmit_units(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
struct ixgbe_hw *hw = &sc->hw;
if_softc_ctx_t scctx = sc->shared;
struct ix_tx_queue *que = sc->tx_queues;
int i;
for (i = 0; i < sc->num_tx_queues; i++, que++) {
struct tx_ring *txr = &que->txr;
u64 tdba = txr->tx_paddr;
u32 txctrl, txdctl;
int j = txr->me;
/* Set WTHRESH to 8, burst writeback */
txdctl = IXGBE_READ_REG(hw, IXGBE_VFTXDCTL(j));
@ -1281,10 +1295,10 @@ static void
ixv_initialize_rss_mapping(struct ixgbe_softc *sc)
{
struct ixgbe_hw *hw = &sc->hw;
u32 reta = 0, mrqc, rss_key[10];
int queue_id;
int i, j;
u32 rss_hash_config;
if (sc->feat_en & IXGBE_FEATURE_RSS) {
/* Fetch the configured RSS key */
@ -1351,18 +1365,21 @@ ixv_initialize_rss_mapping(struct ixgbe_softc *sc)
if (rss_hash_config & RSS_HASHTYPE_RSS_TCP_IPV6)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
if (rss_hash_config & RSS_HASHTYPE_RSS_IPV6_EX)
device_printf(sc->dev, "%s: RSS_HASHTYPE_RSS_IPV6_EX defined, but not supported\n",
__func__);
device_printf(sc->dev,
"%s: RSS_HASHTYPE_RSS_IPV6_EX defined,"
" but not supported\n", __func__);
if (rss_hash_config & RSS_HASHTYPE_RSS_TCP_IPV6_EX)
device_printf(sc->dev, "%s: RSS_HASHTYPE_RSS_TCP_IPV6_EX defined, but not supported\n",
__func__);
device_printf(sc->dev,
"%s: RSS_HASHTYPE_RSS_TCP_IPV6_EX defined,"
" but not supported\n", __func__);
if (rss_hash_config & RSS_HASHTYPE_RSS_UDP_IPV4)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
if (rss_hash_config & RSS_HASHTYPE_RSS_UDP_IPV6)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
if (rss_hash_config & RSS_HASHTYPE_RSS_UDP_IPV6_EX)
device_printf(sc->dev, "%s: RSS_HASHTYPE_RSS_UDP_IPV6_EX defined, but not supported\n",
__func__);
device_printf(sc->dev,
"%s: RSS_HASHTYPE_RSS_UDP_IPV6_EX defined,"
" but not supported\n", __func__);
IXGBE_WRITE_REG(hw, IXGBE_VFMRQC, mrqc);
} /* ixv_initialize_rss_mapping */
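A sketch of the redirection-table fill this function performs, assuming a generic register-write callback and a 64-entry RETA (names are illustrative): each 8-bit entry maps a hash bucket to an RX queue, with four entries packed per 32-bit write.

#include <stdint.h>

static void
example_fill_reta(void (*write_reta)(int word, uint32_t val), int nqueues)
{
	uint32_t reta = 0;

	for (int i = 0; i < 64; i++) {
		/* Round-robin the hash buckets across the queues. */
		reta = (reta << 8) | (uint32_t)(i % nqueues);
		if ((i & 3) == 3) {	/* flush every fourth entry */
			write_reta(i >> 2, reta);
			reta = 0;
		}
	}
}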
@ -1374,22 +1391,22 @@ static void
ixv_initialize_receive_units(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
if_softc_ctx_t scctx;
struct ixgbe_hw *hw = &sc->hw;
#ifdef DEV_NETMAP
if_t ifp = iflib_get_ifp(ctx);
#endif
struct ix_rx_queue *que = sc->rx_queues;
u32 bufsz, psrtype;
bufsz = (sc->rx_mbuf_sz + BSIZEPKT_ROUNDUP) >>
IXGBE_SRRCTL_BSIZEPKT_SHIFT;
psrtype = IXGBE_PSRTYPE_TCPHDR
| IXGBE_PSRTYPE_UDPHDR
| IXGBE_PSRTYPE_IPV4HDR
| IXGBE_PSRTYPE_IPV6HDR
| IXGBE_PSRTYPE_L2HDR;
psrtype = IXGBE_PSRTYPE_TCPHDR |
IXGBE_PSRTYPE_UDPHDR |
IXGBE_PSRTYPE_IPV4HDR |
IXGBE_PSRTYPE_IPV6HDR |
IXGBE_PSRTYPE_L2HDR;
if (sc->num_rx_queues > 1)
psrtype |= 1 << 29;
@ -1398,15 +1415,18 @@ ixv_initialize_receive_units(if_ctx_t ctx)
/* Tell PF our max_frame size */
if (ixgbevf_rlpml_set_vf(hw, sc->max_frame_size) != 0) {
device_printf(sc->dev, "There is a problem with the PF setup. It is likely the receive unit for this VF will not function correctly.\n");
device_printf(sc->dev,
"There is a problem with the PF setup. It is likely the"
" receive unit for this VF will not function correctly."
"\n");
}
scctx = sc->shared;
for (int i = 0; i < sc->num_rx_queues; i++, que++) {
struct rx_ring *rxr = &que->rxr;
u64 rdba = rxr->rx_paddr;
u32 reg, rxdctl;
int j = rxr->me;
/* Disable the queue */
rxdctl = IXGBE_READ_REG(hw, IXGBE_VFRXDCTL(j));
@ -1497,10 +1517,10 @@ ixv_initialize_receive_units(if_ctx_t ctx)
static void
ixv_setup_vlan_support(if_ctx_t ctx)
{
if_t ifp = iflib_get_ifp(ctx);
struct ixgbe_softc *sc = iflib_get_softc(ctx);
struct ixgbe_hw *hw = &sc->hw;
u32 ctrl, vid, vfta, retry;
/*
* We get here thru if_init, meaning
@ -1571,7 +1591,7 @@ static void
ixv_if_register_vlan(if_ctx_t ctx, u16 vtag)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
u16 index, bit;
index = (vtag >> 5) & 0x7F;
bit = vtag & 0x1F;
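A worked example of the VFTA indexing above — the 12-bit VLAN tag selects one bit out of 128 32-bit words (4096 bits total):

/*
 * For vtag = 1234:
 *   index = (1234 >> 5) & 0x7F = 38   (which 32-bit VFTA word)
 *   bit   =  1234 & 0x1F       = 18   (which bit within that word)
 * so registering VLAN 1234 sets bit 18 of VFTA word 38.
 */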
@ -1589,7 +1609,7 @@ static void
ixv_if_unregister_vlan(if_ctx_t ctx, u16 vtag)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
u16 index, bit;
index = (vtag >> 5) & 0x7F;
bit = vtag & 0x1F;
@ -1603,10 +1623,10 @@ ixv_if_unregister_vlan(if_ctx_t ctx, u16 vtag)
static void
ixv_if_enable_intr(if_ctx_t ctx)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
struct ixgbe_hw *hw = &sc->hw;
struct ix_rx_queue *que = sc->rx_queues;
u32 mask = (IXGBE_EIMS_ENABLE_MASK & ~IXGBE_EIMS_RTX_QUEUE);
IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, mask);
@ -1638,7 +1658,7 @@ ixv_if_disable_intr(if_ctx_t ctx)
static int
ixv_if_rx_queue_intr_enable(if_ctx_t ctx, uint16_t rxqid)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
struct ix_rx_queue *que = &sc->rx_queues[rxqid];
ixv_enable_queue(sc, que->rxr.me);
@ -1658,7 +1678,7 @@ static void
ixv_set_ivar(struct ixgbe_softc *sc, u8 entry, u8 vector, s8 type)
{
struct ixgbe_hw *hw = &sc->hw;
u32 ivar, index;
vector |= IXGBE_IVAR_ALLOC_VAL;
@ -1808,18 +1828,18 @@ ixv_update_stats(struct ixgbe_softc *sc)
static void
ixv_add_stats_sysctls(struct ixgbe_softc *sc)
{
device_t dev = sc->dev;
struct ix_tx_queue *tx_que = sc->tx_queues;
struct ix_rx_queue *rx_que = sc->rx_queues;
struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
struct sysctl_oid *tree = device_get_sysctl_tree(dev);
struct sysctl_oid_list *child = SYSCTL_CHILDREN(tree);
struct ixgbevf_hw_stats *stats = &sc->stats.vf;
struct sysctl_oid *stat_node, *queue_node;
struct sysctl_oid_list *stat_list, *queue_list;
#define QUEUE_NAME_LEN 32
char namebuf[QUEUE_NAME_LEN];
/* Driver Statistics */
SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "watchdog_events",
@ -1922,9 +1942,9 @@ ixv_sysctl_debug(SYSCTL_HANDLER_ARGS)
static void
ixv_init_device_features(struct ixgbe_softc *sc)
{
sc->feat_cap = IXGBE_FEATURE_NETMAP
| IXGBE_FEATURE_VF
| IXGBE_FEATURE_LEGACY_TX;
sc->feat_cap = IXGBE_FEATURE_NETMAP |
IXGBE_FEATURE_VF |
IXGBE_FEATURE_LEGACY_TX;
/* A tad short on feature flags for VFs, atm. */
switch (sc->hw.mac.type) {


@ -217,7 +217,7 @@ ixgbe_ping_all_vfs(struct ixgbe_softc *sc)
static void
ixgbe_vf_set_default_vlan(struct ixgbe_softc *sc, struct ixgbe_vf *vf,
uint16_t tag)
{
struct ixgbe_hw *hw;
uint32_t vmolr, vmvir;
@ -269,7 +269,6 @@ ixgbe_clear_vfmbmem(struct ixgbe_softc *sc, struct ixgbe_vf *vf)
static boolean_t
ixgbe_vf_frame_size_compatible(struct ixgbe_softc *sc, struct ixgbe_vf *vf)
{
/*
* Frame size compatibility between PF and VF is only a problem on
* 82599-based cards. X540 and later support any combination of jumbo
@ -282,8 +281,8 @@ ixgbe_vf_frame_size_compatible(struct ixgbe_softc *sc, struct ixgbe_vf *vf)
case IXGBE_API_VER_1_0:
case IXGBE_API_VER_UNKNOWN:
/*
* On legacy (1.0 and older) VF versions, we don't support jumbo
* frames on either the PF or the VF.
* On legacy (1.0 and older) VF versions, we don't support
* jumbo frames on either the PF or the VF.
*/
if (sc->max_frame_size > ETHER_MAX_LEN ||
vf->maximum_frame_size > ETHER_MAX_LEN)
@ -302,8 +301,8 @@ ixgbe_vf_frame_size_compatible(struct ixgbe_softc *sc, struct ixgbe_vf *vf)
return (true);
/*
* Jumbo frames only work with VFs if the PF is also using jumbo
* frames.
* Jumbo frames only work with VFs if the PF is also using
* jumbo frames.
*/
if (sc->max_frame_size <= ETHER_MAX_LEN)
return (true);
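The two comments above define the whole policy; a sketch of it as one predicate, using an assumed constant for the standard 1518-byte Ethernet maximum (EX_ETHER_MAX_LEN stands in for ETHER_MAX_LEN, and the function is illustrative, not the driver's):

#include <stdbool.h>
#include <stdint.h>

#define EX_ETHER_MAX_LEN 1518	/* standard maximum Ethernet frame */

static bool
example_frame_size_compatible(bool legacy_api, uint32_t pf_max,
    uint32_t vf_max)
{
	if (legacy_api)			/* API 1.0/unknown: no jumbo at all */
		return (pf_max <= EX_ETHER_MAX_LEN &&
		    vf_max <= EX_ETHER_MAX_LEN);
	if (vf_max <= EX_ETHER_MAX_LEN)	/* VF not using jumbo: always OK */
		return (true);
	return (pf_max > EX_ETHER_MAX_LEN); /* VF jumbo requires PF jumbo */
}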
@ -526,7 +525,7 @@ ixgbe_vf_set_lpe(struct ixgbe_softc *sc, struct ixgbe_vf *vf, uint32_t *msg)
static void
ixgbe_vf_set_macvlan(struct ixgbe_softc *sc, struct ixgbe_vf *vf,
uint32_t *msg)
{
//XXX implement this
ixgbe_send_vf_failure(sc, vf, msg[0]);
@ -537,7 +536,6 @@ static void
ixgbe_vf_api_negotiate(struct ixgbe_softc *sc, struct ixgbe_vf *vf,
uint32_t *msg)
{
switch (msg[1]) {
case IXGBE_API_VER_1_0:
case IXGBE_API_VER_1_1:
@ -553,7 +551,8 @@ ixgbe_vf_api_negotiate(struct ixgbe_softc *sc, struct ixgbe_vf *vf,
static void
ixgbe_vf_get_queues(struct ixgbe_softc *sc, struct ixgbe_vf *vf, uint32_t *msg)
ixgbe_vf_get_queues(struct ixgbe_softc *sc, struct ixgbe_vf *vf,
uint32_t *msg)
{
struct ixgbe_hw *hw;
uint32_t resp[IXGBE_VF_GET_QUEUES_RESP_LEN];
@ -585,9 +584,9 @@ ixgbe_vf_get_queues(struct ixgbe_softc *sc, struct ixgbe_vf *vf, uint32_t *msg)
static void
ixgbe_process_vf_msg(if_ctx_t ctx, struct ixgbe_vf *vf)
{
struct ixgbe_softc *sc = iflib_get_softc(ctx);
#ifdef KTR
if_t ifp = iflib_get_ifp(ctx);
#endif
struct ixgbe_hw *hw;
uint32_t msg[IXGBE_VFMAILBOX_SIZE];
@ -639,13 +638,12 @@ ixgbe_process_vf_msg(if_ctx_t ctx, struct ixgbe_vf *vf)
}
} /* ixgbe_process_vf_msg */
/* Tasklet for handling VF -> PF mailbox messages */
void
ixgbe_handle_mbx(void *context)
{
if_ctx_t ctx = context;
struct ixgbe_softc *sc = iflib_get_softc(ctx);
struct ixgbe_hw *hw;
struct ixgbe_vf *vf;
int i;
@ -656,13 +654,16 @@ ixgbe_handle_mbx(void *context)
vf = &sc->vfs[i];
if (vf->flags & IXGBE_VF_ACTIVE) {
if (hw->mbx.ops[vf->pool].check_for_rst(hw, vf->pool) == 0)
if (hw->mbx.ops[vf->pool].check_for_rst(hw,
vf->pool) == 0)
ixgbe_process_vf_reset(sc, vf);
if (hw->mbx.ops[vf->pool].check_for_msg(hw, vf->pool) == 0)
if (hw->mbx.ops[vf->pool].check_for_msg(hw,
vf->pool) == 0)
ixgbe_process_vf_msg(ctx, vf);
if (hw->mbx.ops[vf->pool].check_for_ack(hw, vf->pool) == 0)
if (hw->mbx.ops[vf->pool].check_for_ack(hw,
vf->pool) == 0)
ixgbe_process_vf_ack(sc, vf);
}
}
@ -799,27 +800,27 @@ ixgbe_initialize_iov(struct ixgbe_softc *sc)
/* RMW appropriate registers based on IOV mode */
/* Read... */
mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT);
gpie = IXGBE_READ_REG(hw, IXGBE_GPIE);
/* Modify... */
mrqc &= ~IXGBE_MRQC_MRQE_MASK;
mtqc = IXGBE_MTQC_VT_ENA; /* No initial MTQC read needed */
gcr_ext |= IXGBE_GCR_EXT_MSIX_EN;
gcr_ext &= ~IXGBE_GCR_EXT_VT_MODE_MASK;
gpie &= ~IXGBE_GPIE_VTMODE_MASK;
switch (sc->iov_mode) {
case IXGBE_64_VM:
mrqc |= IXGBE_MRQC_VMDQRSS64EN;
mtqc |= IXGBE_MTQC_64VF;
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
gpie |= IXGBE_GPIE_VTMODE_64;
break;
case IXGBE_32_VM:
mrqc |= IXGBE_MRQC_VMDQRSS32EN;
mtqc |= IXGBE_MTQC_32VF;
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
gpie |= IXGBE_GPIE_VTMODE_32;
break;
default:
panic("Unexpected SR-IOV mode %d", sc->iov_mode);


@ -1,4 +1,4 @@
/******************************************************************************
/*****************************************************************************
Copyright (c) 2001-2017, Intel Corporation
All rights reserved.
@ -29,7 +29,7 @@
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
******************************************************************************/
*****************************************************************************/
#ifndef IXGBE_STANDALONE_BUILD
#include "opt_inet.h"
@ -80,7 +80,7 @@ ixgbe_tx_ctx_setup(struct ixgbe_adv_tx_context_desc *TXD, if_pkt_info_t pi)
{
uint32_t vlan_macip_lens, type_tucmd_mlhl;
uint32_t olinfo_status, mss_l4len_idx, pktlen, offload;
u8 ehdrlen;
offload = true;
olinfo_status = mss_l4len_idx = vlan_macip_lens = type_tucmd_mlhl = 0;
@ -105,9 +105,12 @@ ixgbe_tx_ctx_setup(struct ixgbe_adv_tx_context_desc *TXD, if_pkt_info_t pi)
/* First check if TSO is to be used */
if (pi->ipi_csum_flags & CSUM_TSO) {
/* This is used in the transmit desc in encap */
pktlen = pi->ipi_len - ehdrlen - pi->ipi_ip_hlen - pi->ipi_tcp_hlen;
mss_l4len_idx |= (pi->ipi_tso_segsz << IXGBE_ADVTXD_MSS_SHIFT);
mss_l4len_idx |= (pi->ipi_tcp_hlen << IXGBE_ADVTXD_L4LEN_SHIFT);
pktlen = pi->ipi_len - ehdrlen - pi->ipi_ip_hlen -
pi->ipi_tcp_hlen;
mss_l4len_idx |=
(pi->ipi_tso_segsz << IXGBE_ADVTXD_MSS_SHIFT);
mss_l4len_idx |=
(pi->ipi_tcp_hlen << IXGBE_ADVTXD_L4LEN_SHIFT);
}
olinfo_status |= pktlen << IXGBE_ADVTXD_PAYLEN_SHIFT;
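A worked example of the TSO payload arithmetic above:

/*
 * For a 65535-byte TSO send with a 14-byte Ethernet header, a 20-byte
 * IPv4 header and a 20-byte TCP header:
 *   pktlen = 65535 - 14 - 20 - 20 = 65481 payload bytes,
 * which the hardware segments into MSS-sized frames; the shifts then
 * pack the MSS and L4 header length into mss_l4len_idx.
 */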
@ -126,7 +129,8 @@ ixgbe_tx_ctx_setup(struct ixgbe_adv_tx_context_desc *TXD, if_pkt_info_t pi)
switch (pi->ipi_ipproto) {
case IPPROTO_TCP:
if (pi->ipi_csum_flags & (CSUM_IP_TCP | CSUM_IP6_TCP | CSUM_TSO))
if (pi->ipi_csum_flags &
(CSUM_IP_TCP | CSUM_IP6_TCP | CSUM_TSO))
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP;
else
offload = false;
@ -168,17 +172,17 @@ ixgbe_tx_ctx_setup(struct ixgbe_adv_tx_context_desc *TXD, if_pkt_info_t pi)
static int
ixgbe_isc_txd_encap(void *arg, if_pkt_info_t pi)
{
struct ixgbe_softc *sc = arg;
if_softc_ctx_t scctx = sc->shared;
struct ix_tx_queue *que = &sc->tx_queues[pi->ipi_qsidx];
struct tx_ring *txr = &que->txr;
int nsegs = pi->ipi_nsegs;
bus_dma_segment_t *segs = pi->ipi_segs;
union ixgbe_adv_tx_desc *txd = NULL;
struct ixgbe_adv_tx_context_desc *TXD;
int i, j, first, pidx_last;
uint32_t olinfo_status, cmd, flags;
qidx_t ntxd;
cmd = (IXGBE_ADVTXD_DTYP_DATA |
IXGBE_ADVTXD_DCMD_IFCS | IXGBE_ADVTXD_DCMD_DEXT);
@ -249,9 +253,9 @@ ixgbe_isc_txd_encap(void *arg, if_pkt_info_t pi)
static void
ixgbe_isc_txd_flush(void *arg, uint16_t txqid, qidx_t pidx)
{
struct ixgbe_softc *sc = arg;
struct ix_tx_queue *que = &sc->tx_queues[txqid];
struct tx_ring *txr = &que->txr;
IXGBE_WRITE_REG(&sc->hw, txr->tail, pidx);
} /* ixgbe_isc_txd_flush */
@ -263,14 +267,14 @@ static int
ixgbe_isc_txd_credits_update(void *arg, uint16_t txqid, bool clear)
{
struct ixgbe_softc *sc = arg;
if_softc_ctx_t scctx = sc->shared;
struct ix_tx_queue *que = &sc->tx_queues[txqid];
struct tx_ring *txr = &que->txr;
qidx_t processed = 0;
int updated;
qidx_t cur, prev, ntxd, rs_cidx;
int32_t delta;
uint8_t status;
rs_cidx = txr->tx_rs_cidx;
if (rs_cidx == txr->tx_rs_pidx)
@ -319,9 +323,9 @@ ixgbe_isc_txd_credits_update(void *arg, uint16_t txqid, bool clear)
static void
ixgbe_isc_rxd_refill(void *arg, if_rxd_update_t iru)
{
struct ixgbe_softc *sc = arg;
struct ix_rx_queue *que = &sc->rx_queues[iru->iru_qsidx];
struct rx_ring *rxr = &que->rxr;
struct ixgbe_softc *sc = arg;
struct ix_rx_queue *que = &sc->rx_queues[iru->iru_qsidx];
struct rx_ring *rxr = &que->rxr;
uint64_t *paddrs;
int i;
uint32_t next_pidx, pidx;
@ -342,11 +346,12 @@ ixgbe_isc_rxd_refill(void *arg, if_rxd_update_t iru)
* ixgbe_isc_rxd_flush
************************************************************************/
static void
ixgbe_isc_rxd_flush(void *arg, uint16_t qsidx, uint8_t flidx __unused, qidx_t pidx)
ixgbe_isc_rxd_flush(void *arg, uint16_t qsidx, uint8_t flidx __unused,
qidx_t pidx)
{
struct ixgbe_softc *sc = arg;
struct ixgbe_softc *sc = arg;
struct ix_rx_queue *que = &sc->rx_queues[qsidx];
struct rx_ring *rxr = &que->rxr;
struct rx_ring *rxr = &que->rxr;
IXGBE_WRITE_REG(&sc->hw, rxr->tail, pidx);
} /* ixgbe_isc_rxd_flush */
@ -357,12 +362,12 @@ ixgbe_isc_rxd_flush(void *arg, uint16_t qsidx, uint8_t flidx __unused, qidx_t pi
static int
ixgbe_isc_rxd_available(void *arg, uint16_t qsidx, qidx_t pidx, qidx_t budget)
{
struct ixgbe_softc *sc = arg;
struct ix_rx_queue *que = &sc->rx_queues[qsidx];
struct rx_ring *rxr = &que->rxr;
struct ixgbe_softc *sc = arg;
struct ix_rx_queue *que = &sc->rx_queues[qsidx];
struct rx_ring *rxr = &que->rxr;
union ixgbe_adv_rx_desc *rxd;
uint32_t staterr;
int cnt, i, nrxd;
uint32_t staterr;
int cnt, i, nrxd;
nrxd = sc->shared->isc_nrxd[0];
for (cnt = 0, i = pidx; cnt < nrxd && cnt <= budget;) {
@ -391,16 +396,16 @@ ixgbe_isc_rxd_available(void *arg, uint16_t qsidx, qidx_t pidx, qidx_t budget)
static int
ixgbe_isc_rxd_pkt_get(void *arg, if_rxd_info_t ri)
{
struct ixgbe_softc *sc = arg;
if_softc_ctx_t scctx = sc->shared;
struct ix_rx_queue *que = &sc->rx_queues[ri->iri_qsidx];
struct rx_ring *rxr = &que->rxr;
union ixgbe_adv_rx_desc *rxd;
struct ixgbe_softc *sc = arg;
if_softc_ctx_t scctx = sc->shared;
struct ix_rx_queue *que = &sc->rx_queues[ri->iri_qsidx];
struct rx_ring *rxr = &que->rxr;
union ixgbe_adv_rx_desc *rxd;
uint16_t pkt_info, len, cidx, i;
uint32_t ptype;
uint32_t staterr = 0;
bool eop;
uint16_t pkt_info, len, cidx, i;
uint32_t ptype;
uint32_t staterr = 0;
bool eop;
i = 0;
cidx = ri->iri_cidx;
@ -425,7 +430,8 @@ ixgbe_isc_rxd_pkt_get(void *arg, if_rxd_info_t ri)
/* Make sure bad packets are discarded */
if (eop && (staterr & IXGBE_RXDADV_ERR_FRAME_ERR_MASK) != 0) {
if (sc->feat_en & IXGBE_FEATURE_VF)
if_inc_counter(ri->iri_ifp, IFCOUNTER_IERRORS, 1);
if_inc_counter(ri->iri_ifp, IFCOUNTER_IERRORS,
1);
rxr->rx_discarded++;
return (EBADMSG);
@ -478,7 +484,8 @@ ixgbe_rx_checksum(uint32_t staterr, if_rxd_info_t ri, uint32_t ptype)
uint8_t errors = (uint8_t)(staterr >> 24);
/* If there is a layer 3 or 4 error we are done */
if (__predict_false(errors & (IXGBE_RXD_ERR_IPE | IXGBE_RXD_ERR_TCPE)))
if (__predict_false(errors &
(IXGBE_RXD_ERR_IPE | IXGBE_RXD_ERR_TCPE)))
return;
/* IP Checksum Good */
@ -492,7 +499,8 @@ ixgbe_rx_checksum(uint32_t staterr, if_rxd_info_t ri, uint32_t ptype)
(ptype & IXGBE_RXDADV_PKTTYPE_SCTP) != 0)) {
ri->iri_csum_flags |= CSUM_SCTP_VALID;
} else {
ri->iri_csum_flags |= CSUM_DATA_VALID | CSUM_PSEUDO_HDR;
ri->iri_csum_flags |=
CSUM_DATA_VALID | CSUM_PSEUDO_HDR;
ri->iri_csum_data = htons(0xffff);
}
}

View File

@ -1,4 +1,4 @@
/******************************************************************************
/*****************************************************************************
SPDX-License-Identifier: BSD-3-Clause
Copyright (c) 2001-2017, Intel Corporation
@ -30,7 +30,7 @@
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
******************************************************************************/
*****************************************************************************/
#ifndef _IXGBE_H_
#define _IXGBE_H_
@ -435,8 +435,8 @@ struct ixgbe_softc {
struct ixgbe_bp_data bypass;
/* Firmware error check */
int recovery_mode;
struct callout fw_mode_timer;
int recovery_mode;
struct callout fw_mode_timer;
/* Misc stats maintained by the driver */
unsigned long dropped_pkts;

View File

@ -723,7 +723,8 @@ struct mlx5_core_dev {
u32 vsc_addr;
u32 issi;
struct mlx5_special_contexts special_contexts;
unsigned int module_status[MLX5_MAX_PORTS];
unsigned int module_status;
unsigned int module_num;
struct mlx5_flow_root_namespace *root_ns;
struct mlx5_flow_root_namespace *fdb_root_ns;
struct mlx5_flow_root_namespace *esw_egress_root_ns;

View File

@ -690,9 +690,9 @@ static const char *mlx5_port_module_event_error_type_to_string(u8 error_type)
unsigned int mlx5_query_module_status(struct mlx5_core_dev *dev, int module_num)
{
if (module_num < 0 || module_num >= MLX5_MAX_PORTS)
return 0; /* undefined */
return dev->module_status[module_num];
if (module_num != dev->module_num)
return 0; /* module number doesn't match what FW reported */
return dev->module_status;
}
static void mlx5_port_module_event(struct mlx5_core_dev *dev,
@ -740,8 +740,8 @@ static void mlx5_port_module_event(struct mlx5_core_dev *dev,
"Module %u, unknown status %d\n", module_num, module_status);
}
/* store module status */
if (module_num < MLX5_MAX_PORTS)
dev->module_status[module_num] = module_status;
dev->module_status = module_status;
dev->module_num = module_num;
}
static void mlx5_port_general_notification_event(struct mlx5_core_dev *dev,

View File

@ -84,7 +84,7 @@ struct mlx5e_tls {
struct workqueue_struct *wq;
uma_zone_t zone;
uint32_t max_resources; /* max number of resources */
volatile uint32_t num_resources; /* current number of resources */
int zone_max;
int init; /* set when ready */
char zname[32];
};

View File

@ -81,17 +81,57 @@ static const char *mlx5e_tls_stats_desc[] = {
static void mlx5e_tls_work(struct work_struct *);
/*
* Expand the tls tag UMA zone in a sleepable context
*/
static void
mlx5e_prealloc_tags(struct mlx5e_priv *priv, int nitems)
{
struct mlx5e_tls_tag **tags;
int i;
tags = malloc(sizeof(tags[0]) * nitems,
M_MLX5E_TLS, M_WAITOK);
for (i = 0; i < nitems; i++)
tags[i] = uma_zalloc(priv->tls.zone, M_WAITOK);
__compiler_membar();
for (i = 0; i < nitems; i++)
uma_zfree(priv->tls.zone, tags[i]);
free(tags, M_MLX5E_TLS);
}
static int
mlx5e_tls_tag_import(void *arg, void **store, int cnt, int domain, int flags)
{
struct mlx5e_tls_tag *ptag;
int i;
struct mlx5e_priv *priv = arg;
int err, i;
/*
* mlx5_tls_open_tis() sleeps on a firmware command, so
* zone allocations must be done from a sleepable context.
* Note that the uma_zalloc() in mlx5e_tls_snd_tag_alloc()
* is done with M_NOWAIT so that hitting the zone limit does
* not cause the allocation to pause forever.
*/
for (i = 0; i != cnt; i++) {
ptag = malloc_domainset(sizeof(*ptag), M_MLX5E_TLS,
mlx5_dev_domainset(arg), flags | M_ZERO);
if (ptag == NULL)
return (i);
ptag->tls = &priv->tls;
mtx_init(&ptag->mtx, "mlx5-tls-tag-mtx", NULL, MTX_DEF);
INIT_WORK(&ptag->work, mlx5e_tls_work);
err = mlx5_tls_open_tis(priv->mdev, 0, priv->tdn,
priv->pdn, &ptag->tisn);
if (err) {
MLX5E_TLS_STAT_INC(ptag, tx_error, 1);
free(ptag, M_MLX5E_TLS);
return (i);
}
store[i] = ptag;
}
return (i);
@ -114,7 +154,6 @@ mlx5e_tls_tag_release(void *arg, void **store, int cnt)
if (ptag->tisn != 0) {
mlx5_tls_close_tis(priv->mdev, ptag->tisn);
atomic_add_32(&ptls->num_resources, -1U);
}
mtx_destroy(&ptag->mtx);
@ -136,20 +175,38 @@ mlx5e_tls_tag_zfree(struct mlx5e_tls_tag *ptag)
/* avoid leaking keys */
memset(ptag->crypto_params, 0, sizeof(ptag->crypto_params));
/* update number of TIS contexts */
if (ptag->tisn == 0)
atomic_add_32(&ptag->tls->num_resources, -1U);
/* return tag to UMA */
uma_zfree(ptag->tls->zone, ptag);
}
static int
mlx5e_max_tag_proc(SYSCTL_HANDLER_ARGS)
{
struct mlx5e_priv *priv = (struct mlx5e_priv *)arg1;
struct mlx5e_tls *ptls = &priv->tls;
int err;
unsigned int max_tags;
max_tags = ptls->zone_max;
err = sysctl_handle_int(oidp, &max_tags, arg2, req);
if (err != 0 || req->newptr == NULL)
return (err);
if (max_tags == ptls->zone_max)
return (0);
if (max_tags > priv->tls.max_resources || max_tags == 0)
return (EINVAL);
ptls->zone_max = max_tags;
uma_zone_set_max(ptls->zone, ptls->zone_max);
return (0);
}
int
mlx5e_tls_init(struct mlx5e_priv *priv)
{
struct mlx5e_tls *ptls = &priv->tls;
struct sysctl_oid *node;
uint32_t x;
uint32_t max_dek, max_tis, x;
int zone_max = 0, prealloc_tags = 0;
if (MLX5_CAP_GEN(priv->mdev, tls_tx) == 0 ||
MLX5_CAP_GEN(priv->mdev, log_max_dek) == 0)
@ -164,13 +221,31 @@ mlx5e_tls_init(struct mlx5e_priv *priv)
snprintf(ptls->zname, sizeof(ptls->zname),
"mlx5_%u_tls", device_get_unit(priv->mdev->pdev->dev.bsddev));
TUNABLE_INT_FETCH("hw.mlx5.tls_max_tags", &zone_max);
TUNABLE_INT_FETCH("hw.mlx5.tls_prealloc_tags", &prealloc_tags);
ptls->zone = uma_zcache_create(ptls->zname,
sizeof(struct mlx5e_tls_tag), NULL, NULL, NULL, NULL,
mlx5e_tls_tag_import, mlx5e_tls_tag_release, priv->mdev,
UMA_ZONE_UNMANAGED);
mlx5e_tls_tag_import, mlx5e_tls_tag_release, priv,
UMA_ZONE_UNMANAGED | (prealloc_tags ? UMA_ZONE_NOFREE : 0));
/* shared between RX and TX TLS */
ptls->max_resources = 1U << (MLX5_CAP_GEN(priv->mdev, log_max_dek) - 1);
max_dek = 1U << (MLX5_CAP_GEN(priv->mdev, log_max_dek) - 1);
max_tis = 1U << (MLX5_CAP_GEN(priv->mdev, log_max_tis) - 1);
ptls->max_resources = MIN(max_dek, max_tis);
if (zone_max != 0) {
ptls->zone_max = zone_max;
if (ptls->zone_max > priv->tls.max_resources)
ptls->zone_max = priv->tls.max_resources;
} else {
ptls->zone_max = priv->tls.max_resources;
}
uma_zone_set_max(ptls->zone, ptls->zone_max);
if (prealloc_tags != 0)
mlx5e_prealloc_tags(priv, ptls->zone_max);
for (x = 0; x != MLX5E_TLS_STATS_NUM; x++)
ptls->stats.arg[x] = counter_u64_alloc(M_WAITOK);
@ -183,6 +258,10 @@ mlx5e_tls_init(struct mlx5e_priv *priv)
if (node == NULL)
return (0);
SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(node), OID_AUTO, "tls_max_tag",
CTLFLAG_RW | CTLTYPE_UINT | CTLFLAG_MPSAFE, priv, 0, mlx5e_max_tag_proc,
"IU", "Max number of TLS offload session tags");
mlx5e_create_counter_stats(&ptls->ctx,
SYSCTL_CHILDREN(node), "stats",
mlx5e_tls_stats_desc, MLX5E_TLS_STATS_NUM,
@ -206,9 +285,6 @@ mlx5e_tls_cleanup(struct mlx5e_priv *priv)
uma_zdestroy(ptls->zone);
destroy_workqueue(ptls->wq);
/* check if all resources are freed */
MPASS(priv->tls.num_resources == 0);
for (x = 0; x != MLX5E_TLS_STATS_NUM; x++)
counter_u64_free(ptls->stats.arg[x]);
}
@ -225,7 +301,7 @@ mlx5e_tls_st_init(struct mlx5e_priv *priv, struct mlx5e_tls_tag *ptag)
priv->pdn, &ptag->tisn);
if (err) {
MLX5E_TLS_STAT_INC(ptag, tx_error, 1);
return (err);
return (-err);
}
}
MLX5_SET(sw_tls_cntx, ptag->crypto_params, progress.pd, ptag->tisn);
@ -238,7 +314,7 @@ mlx5e_tls_st_init(struct mlx5e_priv *priv, struct mlx5e_tls_tag *ptag)
&ptag->dek_index);
if (err) {
MLX5E_TLS_STAT_INC(ptag, tx_error, 1);
return (err);
return (-err);
}
MLX5_SET(sw_tls_cntx, ptag->crypto_params, param.dek_index, ptag->dek_index);
@ -318,8 +394,7 @@ mlx5e_tls_set_params(void *ctx, const struct tls_session_params *en)
CTASSERT(MLX5E_TLS_ST_INIT == 0);
int
mlx5e_tls_snd_tag_alloc(if_t ifp,
union if_snd_tag_alloc_params *params,
mlx5e_tls_snd_tag_alloc(if_t ifp, union if_snd_tag_alloc_params *params,
struct m_snd_tag **ppmt)
{
union if_snd_tag_alloc_params rl_params;
@ -334,28 +409,16 @@ mlx5e_tls_snd_tag_alloc(if_t ifp,
if (priv->gone != 0 || priv->tls.init == 0)
return (EOPNOTSUPP);
/* allocate new tag from zone, if any */
ptag = uma_zalloc(priv->tls.zone, M_WAITOK);
ptag = uma_zalloc(priv->tls.zone, M_NOWAIT);
if (ptag == NULL)
return (ENOMEM);
/* sanity check default values */
MPASS(ptag->dek_index == 0);
MPASS(ptag->dek_index_ok == 0);
/* setup TLS tag */
ptag->tls = &priv->tls;
/* check if there is no TIS context */
if (ptag->tisn == 0) {
uint32_t value;
value = atomic_fetchadd_32(&priv->tls.num_resources, 1U);
/* check resource limits */
if (value >= priv->tls.max_resources) {
error = ENOMEM;
goto failure;
}
}
KASSERT(ptag->tisn != 0, ("ptag %p w/0 tisn", ptag));
en = &params->tls.tls->params;
@ -448,17 +511,9 @@ mlx5e_tls_snd_tag_alloc(if_t ifp,
/* reset state */
ptag->state = MLX5E_TLS_ST_INIT;
/*
* Try to immediately init the tag. We may fail if the NIC's
* resources are tied up with send tags that are in the work
* queue, waiting to be freed. So if we fail, put ourselves
* on the queue so as to try again after resources have been freed.
*/
error = mlx5e_tls_st_init(priv, ptag);
if (error != 0) {
queue_work(priv->tls.wq, &ptag->work);
flush_work(&ptag->work);
}
if (error != 0)
goto failure;
return (0);
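
Editor's note: the import/release pair above follows the uma(9) cache-zone contract set up by uma_zcache_create() in mlx5e_tls_init(). A minimal sketch of that contract, using hypothetical names (struct obj, M_OBJ) rather than the driver's types:

/*
 * Sketch of a uma_zcache_create() import/release pair. The import
 * callback produces up to cnt objects into store[] and returns how
 * many it made; partial success is allowed, which is why
 * mlx5e_tls_tag_import() above can stop and return (i) when the
 * firmware command fails. The release callback tears objects back
 * down when the cache is trimmed or destroyed.
 */
static int
obj_import(void *arg, void **store, int cnt, int domain, int flags)
{
	int i;

	for (i = 0; i < cnt; i++) {
		store[i] = malloc(sizeof(struct obj), M_OBJ, flags | M_ZERO);
		if (store[i] == NULL)
			break;
	}
	return (i);
}

static void
obj_release(void *arg, void **store, int cnt)
{
	int i;

	for (i = 0; i < cnt; i++)
		free(store[i], M_OBJ);
}

The UMA_ZONE_NOFREE flag in the prealloc case keeps the warmed-up items from being trimmed back out of the cache, so the firmware cost of mlx5_tls_open_tis() is paid once, at attach, by mlx5e_prealloc_tags(). Both knobs are loader tunables; illustrative values:

hw.mlx5.tls_max_tags="1024"
hw.mlx5.tls_prealloc_tags="1"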

View File

@ -3404,6 +3404,51 @@ mlx5e_set_rx_mode(if_t ifp)
queue_work(priv->wq, &priv->set_rx_mode_work);
}
static bool
mlx5e_is_ipsec_capable(struct mlx5_core_dev *mdev)
{
#ifdef IPSEC_OFFLOAD
if ((mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_PACKET_OFFLOAD) != 0)
return (true);
#endif
return (false);
}
static bool
mlx5e_is_ratelimit_capable(struct mlx5_core_dev *mdev)
{
#ifdef RATELIMIT
if (MLX5_CAP_GEN(mdev, qos) &&
MLX5_CAP_QOS(mdev, packet_pacing))
return (true);
#endif
return (false);
}
static bool
mlx5e_is_tlstx_capable(struct mlx5_core_dev *mdev)
{
#ifdef KERN_TLS
if (MLX5_CAP_GEN(mdev, tls_tx) != 0 &&
MLX5_CAP_GEN(mdev, log_max_dek) != 0)
return (true);
#endif
return (false);
}
static bool
mlx5e_is_tlsrx_capable(struct mlx5_core_dev *mdev)
{
#ifdef KERN_TLS
if (MLX5_CAP_GEN(mdev, tls_rx) != 0 &&
MLX5_CAP_GEN(mdev, log_max_dek) != 0 &&
MLX5_CAP_FLOWTABLE_NIC_RX(mdev,
ft_field_support.outer_ip_version) != 0)
return (true);
#endif
return (false);
}
static int
mlx5e_ioctl(if_t ifp, u_long command, caddr_t data)
{
@ -3507,6 +3552,24 @@ mlx5e_ioctl(if_t ifp, u_long command, caddr_t data)
drv_ioctl_data = (struct siocsifcapnv_driver_data *)data;
PRIV_LOCK(priv);
siocsifcap_driver:
if (!mlx5e_is_tlstx_capable(priv->mdev)) {
drv_ioctl_data->reqcap &= ~(IFCAP_TXTLS4 |
IFCAP_TXTLS6);
}
if (!mlx5e_is_tlsrx_capable(priv->mdev)) {
drv_ioctl_data->reqcap &= ~(
IFCAP2_BIT(IFCAP2_RXTLS4) |
IFCAP2_BIT(IFCAP2_RXTLS6));
}
if (!mlx5e_is_ipsec_capable(priv->mdev)) {
drv_ioctl_data->reqcap &=
~IFCAP2_BIT(IFCAP2_IPSEC_OFFLOAD);
}
if (!mlx5e_is_ratelimit_capable(priv->mdev)) {
drv_ioctl_data->reqcap &= ~(IFCAP_TXTLS_RTLMT |
IFCAP_TXRTLMT);
}
mask = drv_ioctl_data->reqcap ^ if_getcapenable(ifp);
if (mask & IFCAP_TXCSUM) {
@ -4535,29 +4598,20 @@ mlx5e_create_ifp(struct mlx5_core_dev *mdev)
if_setcapabilitiesbit(ifp, IFCAP_TSO | IFCAP_VLAN_HWTSO, 0);
if_setcapabilitiesbit(ifp, IFCAP_HWSTATS | IFCAP_HWRXTSTMP, 0);
if_setcapabilitiesbit(ifp, IFCAP_MEXTPG, 0);
#ifdef KERN_TLS
if (MLX5_CAP_GEN(mdev, tls_tx) != 0 &&
MLX5_CAP_GEN(mdev, log_max_dek) != 0)
if (mlx5e_is_tlstx_capable(mdev))
if_setcapabilitiesbit(ifp, IFCAP_TXTLS4 | IFCAP_TXTLS6, 0);
if (MLX5_CAP_GEN(mdev, tls_rx) != 0 &&
MLX5_CAP_GEN(mdev, log_max_dek) != 0 &&
MLX5_CAP_FLOWTABLE_NIC_RX(mdev,
ft_field_support.outer_ip_version) != 0)
if (mlx5e_is_tlsrx_capable(mdev))
if_setcapabilities2bit(ifp, IFCAP2_BIT(IFCAP2_RXTLS4) |
IFCAP2_BIT(IFCAP2_RXTLS6), 0);
#endif
#ifdef RATELIMIT
if (MLX5_CAP_GEN(mdev, qos) &&
MLX5_CAP_QOS(mdev, packet_pacing))
if_setcapabilitiesbit(ifp, IFCAP_TXRTLMT | IFCAP_TXTLS_RTLMT,
0);
#endif
if (mlx5e_is_ratelimit_capable(mdev)) {
if_setcapabilitiesbit(ifp, IFCAP_TXRTLMT, 0);
if (mlx5e_is_tlstx_capable(mdev))
if_setcapabilitiesbit(ifp, IFCAP_TXTLS_RTLMT, 0);
}
if_setcapabilitiesbit(ifp, IFCAP_VXLAN_HWCSUM | IFCAP_VXLAN_HWTSO, 0);
#ifdef IPSEC_OFFLOAD
if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_PACKET_OFFLOAD)
if (mlx5e_is_ipsec_capable(mdev))
if_setcapabilities2bit(ifp, IFCAP2_BIT(IFCAP2_IPSEC_OFFLOAD),
0);
#endif
if_setsndtagallocfn(ifp, mlx5e_snd_tag_alloc);
#ifdef RATELIMIT

View File

@ -67,6 +67,24 @@ struct dummy_softc {
struct mtx *lock;
};
static bool
dummy_active(struct dummy_softc *sc)
{
struct dummy_chan *ch;
int i;
snd_mtxassert(sc->lock);
for (i = 0; i < sc->chnum; i++) {
ch = &sc->chans[i];
if (ch->run)
return (true);
}
/* No channel is running at the moment. */
return (false);
}
static void
dummy_chan_io(void *arg)
{
@ -74,7 +92,9 @@ dummy_chan_io(void *arg)
struct dummy_chan *ch;
int i = 0;
snd_mtxlock(sc->lock);
/* Do not reschedule if no channel is running. */
if (!dummy_active(sc))
return;
for (i = 0; i < sc->chnum; i++) {
ch = &sc->chans[i];
@ -89,8 +109,6 @@ dummy_chan_io(void *arg)
snd_mtxlock(sc->lock);
}
callout_schedule(&sc->callout, 1);
snd_mtxunlock(sc->lock);
}
static int
@ -179,15 +197,15 @@ dummy_chan_trigger(kobj_t obj, void *data, int go)
switch (go) {
case PCMTRIG_START:
if (!callout_active(&sc->callout))
callout_reset(&sc->callout, 1, dummy_chan_io, sc);
ch->ptr = 0;
ch->run = 1;
callout_reset(&sc->callout, 1, dummy_chan_io, sc);
break;
case PCMTRIG_STOP:
case PCMTRIG_ABORT:
ch->run = 0;
if (callout_active(&sc->callout))
/* If all channels are stopped, stop the callout as well. */
if (!dummy_active(sc))
callout_stop(&sc->callout);
default:
break;
@ -292,6 +310,7 @@ dummy_attach(device_t dev)
sc = device_get_softc(dev);
sc->dev = dev;
sc->lock = snd_mtxcreate(device_get_nameunit(dev), "snd_dummy softc");
callout_init_mtx(&sc->callout, sc->lock, 0);
sc->cap_fmts[0] = SND_FORMAT(AFMT_S32_LE, 2, 0);
sc->cap_fmts[1] = SND_FORMAT(AFMT_S24_LE, 2, 0);
@ -316,7 +335,6 @@ dummy_attach(device_t dev)
if (pcm_register(dev, status))
return (ENXIO);
mixer_init(dev, &dummy_mixer_class, sc);
callout_init(&sc->callout, 1);
return (0);
}
@ -327,8 +345,8 @@ dummy_detach(device_t dev)
struct dummy_softc *sc = device_get_softc(dev);
int err;
callout_drain(&sc->callout);
err = pcm_unregister(dev);
callout_drain(&sc->callout);
snd_mtxfree(sc->lock);
return (err);

View File

@ -309,14 +309,7 @@ chn_wakeup(struct pcm_channel *c)
if (CHN_EMPTY(c, children.busy)) {
if (SEL_WAITING(sndbuf_getsel(bs)) && chn_polltrigger(c))
selwakeuppri(sndbuf_getsel(bs), PRIBIO);
if (c->flags & CHN_F_SLEEPING) {
/*
* Ok, I can just panic it right here since it is
* quite obvious that we never allow multiple waiters
* from userland. I'm too generous...
*/
CHN_BROADCAST(&c->intr_cv);
}
CHN_BROADCAST(&c->intr_cv);
} else {
CHN_FOREACH(ch, c, children.busy) {
CHN_LOCK(ch);
@ -332,15 +325,11 @@ chn_sleep(struct pcm_channel *c, int timeout)
int ret;
CHN_LOCKASSERT(c);
KASSERT((c->flags & CHN_F_SLEEPING) == 0,
("%s(): entered with CHN_F_SLEEPING", __func__));
if (c->flags & CHN_F_DEAD)
return (EINVAL);
c->flags |= CHN_F_SLEEPING;
ret = cv_timedwait_sig(&c->intr_cv, c->lock, timeout);
c->flags &= ~CHN_F_SLEEPING;
return ((c->flags & CHN_F_DEAD) ? EINVAL : ret);
}
@ -2318,44 +2307,46 @@ chn_trigger(struct pcm_channel *c, int go)
if (go == c->trigger)
return (0);
if (snd_verbose > 3) {
device_printf(c->dev, "%s() %s: calling go=0x%08x , "
"prev=0x%08x\n", __func__, c->name, go, c->trigger);
}
c->trigger = go;
ret = CHANNEL_TRIGGER(c->methods, c->devinfo, go);
if (ret != 0)
return (ret);
CHN_UNLOCK(c);
PCM_LOCK(d);
CHN_LOCK(c);
/*
* Do nothing if another thread set a different trigger while we had
* dropped the mutex.
*/
if (go != c->trigger) {
PCM_UNLOCK(d);
return (0);
}
/*
* Use the SAFE variants to prevent inserting/removing an already
* existing/missing element.
*/
switch (go) {
case PCMTRIG_START:
if (snd_verbose > 3)
device_printf(c->dev,
"%s() %s: calling go=0x%08x , "
"prev=0x%08x\n", __func__, c->name, go,
c->trigger);
if (c->trigger != PCMTRIG_START) {
c->trigger = go;
CHN_UNLOCK(c);
PCM_LOCK(d);
CHN_INSERT_HEAD(d, c, channels.pcm.busy);
PCM_UNLOCK(d);
CHN_LOCK(c);
chn_syncstate(c);
}
CHN_INSERT_HEAD_SAFE(d, c, channels.pcm.busy);
PCM_UNLOCK(d);
chn_syncstate(c);
break;
case PCMTRIG_STOP:
case PCMTRIG_ABORT:
if (snd_verbose > 3)
device_printf(c->dev,
"%s() %s: calling go=0x%08x , "
"prev=0x%08x\n", __func__, c->name, go,
c->trigger);
if (c->trigger == PCMTRIG_START) {
c->trigger = go;
CHN_UNLOCK(c);
PCM_LOCK(d);
CHN_REMOVE(d, c, channels.pcm.busy);
PCM_UNLOCK(d);
CHN_LOCK(c);
}
CHN_REMOVE_SAFE(d, c, channels.pcm.busy);
PCM_UNLOCK(d);
break;
default:
PCM_UNLOCK(d);
break;
}

View File

@ -354,7 +354,7 @@ enum {
#define CHN_F_RUNNING 0x00000004 /* dma is running */
#define CHN_F_TRIGGERED 0x00000008
#define CHN_F_NOTRIGGER 0x00000010
#define CHN_F_SLEEPING 0x00000020
/* unused 0x00000020 */
#define CHN_F_NBIO 0x00000040 /* do non-blocking i/o */
#define CHN_F_MMAP 0x00000080 /* has been mmap()ed */
@ -362,7 +362,7 @@ enum {
#define CHN_F_BUSY 0x00000100 /* has been opened */
#define CHN_F_DIRTY 0x00000200 /* need re-config */
#define CHN_F_DEAD 0x00000400 /* too many errors, dead, mdk */
#define CHN_F_SILENCE 0x00000800 /* silence, nil, null, yada */
/* unused 0x00000800 */
#define CHN_F_HAS_SIZE 0x00001000 /* user set block size */
#define CHN_F_HAS_VCHAN 0x00002000 /* vchan master */
@ -381,14 +381,14 @@ enum {
"\002ABORTING" \
"\003RUNNING" \
"\004TRIGGERED" \
/* \006 */ \
"\005NOTRIGGER" \
"\006SLEEPING" \
"\007NBIO" \
"\010MMAP" \
"\011BUSY" \
"\012DIRTY" \
"\013DEAD" \
"\014SILENCE" \
/* \014 */ \
"\015HAS_SIZE" \
"\016HAS_VCHAN" \
"\017VCHAN_PASSTHROUGH" \

View File

@ -137,7 +137,7 @@ dsp_destroy_dev(device_t dev)
struct snddev_info *d;
d = device_get_softc(dev);
destroy_dev_sched(d->dsp_dev);
destroy_dev(d->dsp_dev);
}
static void
@ -177,7 +177,7 @@ dsp_close(void *data)
d = priv->sc;
/* At this point pcm_unregister() will destroy all channels anyway. */
if (!DSP_REGISTERED(d) || PCM_DETACHING(d))
if (!DSP_REGISTERED(d))
goto skip;
PCM_GIANT_ENTER(d);
@ -264,7 +264,7 @@ dsp_open(struct cdev *i_dev, int flags, int mode, struct thread *td)
return (ENODEV);
d = i_dev->si_drv1;
if (!DSP_REGISTERED(d) || PCM_DETACHING(d))
if (!DSP_REGISTERED(d))
return (EBADF);
priv = malloc(sizeof(*priv), M_DEVBUF, M_WAITOK | M_ZERO);
@ -445,7 +445,7 @@ dsp_io_ops(struct dsp_cdevpriv *priv, struct uio *buf)
("%s(): io train wreck!", __func__));
d = priv->sc;
if (!DSP_REGISTERED(d) || PCM_DETACHING(d))
if (!DSP_REGISTERED(d))
return (EBADF);
PCM_GIANT_ENTER(d);
@ -664,7 +664,7 @@ dsp_ioctl(struct cdev *i_dev, u_long cmd, caddr_t arg, int mode,
return (err);
d = priv->sc;
if (!DSP_REGISTERED(d) || PCM_DETACHING(d))
if (!DSP_REGISTERED(d))
return (EBADF);
PCM_GIANT_ENTER(d);
@ -1783,7 +1783,7 @@ dsp_poll(struct cdev *i_dev, int events, struct thread *td)
if ((err = devfs_get_cdevpriv((void **)&priv)) != 0)
return (err);
d = priv->sc;
if (!DSP_REGISTERED(d) || PCM_DETACHING(d)) {
if (!DSP_REGISTERED(d)) {
/* XXX many clients don't understand POLLNVAL */
return (events & (POLLHUP | POLLPRI | POLLIN |
POLLRDNORM | POLLOUT | POLLWRNORM));
@ -1865,7 +1865,7 @@ dsp_mmap_single(struct cdev *i_dev, vm_ooffset_t *offset,
if ((err = devfs_get_cdevpriv((void **)&priv)) != 0)
return (err);
d = priv->sc;
if (!DSP_REGISTERED(d) || PCM_DETACHING(d))
if (!DSP_REGISTERED(d))
return (EINVAL);
PCM_GIANT_ENTER(d);

View File

@ -146,7 +146,7 @@ mixer_set_softpcmvol(struct snd_mixer *m, struct snddev_info *d,
struct pcm_channel *c;
int dropmtx, acquiremtx;
if (!PCM_REGISTERED(d) || PCM_DETACHING(d))
if (!PCM_REGISTERED(d))
return (EINVAL);
if (mtx_owned(m->lock))
@ -199,7 +199,7 @@ mixer_set_eq(struct snd_mixer *m, struct snddev_info *d,
else
return (EINVAL);
if (!PCM_REGISTERED(d) || PCM_DETACHING(d))
if (!PCM_REGISTERED(d))
return (EINVAL);
if (mtx_owned(m->lock))
@ -1053,7 +1053,7 @@ mixer_open(struct cdev *i_dev, int flags, int mode, struct thread *td)
m = i_dev->si_drv1;
d = device_get_softc(m->dev);
if (!PCM_REGISTERED(d) || PCM_DETACHING(d))
if (!PCM_REGISTERED(d))
return (EBADF);
/* XXX Need Giant magic entry ??? */
@ -1209,7 +1209,7 @@ mixer_ioctl(struct cdev *i_dev, u_long cmd, caddr_t arg, int mode,
return (EBADF);
d = device_get_softc(((struct snd_mixer *)i_dev->si_drv1)->dev);
if (!PCM_REGISTERED(d) || PCM_DETACHING(d))
if (!PCM_REGISTERED(d))
return (EBADF);
PCM_GIANT_ENTER(d);
@ -1447,7 +1447,7 @@ mixer_oss_mixerinfo(struct cdev *i_dev, oss_mixerinfo *mi)
for (i = 0; pcm_devclass != NULL &&
i < devclass_get_maxunit(pcm_devclass); i++) {
d = devclass_get_softc(pcm_devclass, i);
if (!PCM_REGISTERED(d) || PCM_DETACHING(d)) {
if (!PCM_REGISTERED(d)) {
if ((mi->dev == -1 && i == snd_unit) || mi->dev == i) {
mixer_oss_mixerinfo_unavail(mi, i);
return (0);

View File

@ -211,40 +211,53 @@ static void
pcm_killchans(struct snddev_info *d)
{
struct pcm_channel *ch;
bool found;
bool again;
PCM_BUSYASSERT(d);
do {
found = false;
KASSERT(!PCM_REGISTERED(d), ("%s(): still registered\n", __func__));
for (;;) {
again = false;
/* Make sure all channels are stopped. */
CHN_FOREACH(ch, d, channels.pcm) {
CHN_LOCK(ch);
/*
* Make sure no channel has gone to sleep in the
* meantime.
*/
chn_shutdown(ch);
/*
* We have to give a thread sleeping in chn_sleep() a
* chance to observe that the channel is dead.
*/
if ((ch->flags & CHN_F_SLEEPING) == 0) {
found = true;
if (ch->intr_cv.cv_waiters == 0 && CHN_STOPPED(ch) &&
ch->inprog == 0) {
CHN_UNLOCK(ch);
break;
continue;
}
chn_shutdown(ch);
if (ch->direction == PCMDIR_PLAY)
chn_flush(ch);
else
chn_abort(ch);
CHN_UNLOCK(ch);
again = true;
}
/*
* All channels are still sleeping. Sleep for a bit and try
* again to see if any of them is awake now.
* Some channels are still active. Sleep for a bit and try
* again.
*/
if (!found) {
pause_sbt("pcmkillchans", SBT_1MS * 5, 0, 0);
continue;
}
if (again)
pause_sbt("pcmkillchans", mstosbt(5), 0, 0);
else
break;
}
/* All channels are finally dead. */
while (!CHN_EMPTY(d, channels.pcm)) {
ch = CHN_FIRST(d, channels.pcm);
chn_kill(ch);
} while (!CHN_EMPTY(d, channels.pcm));
}
if (d->p_unr != NULL)
delete_unrhdr(d->p_unr);
if (d->vp_unr != NULL)
delete_unrhdr(d->vp_unr);
if (d->r_unr != NULL)
delete_unrhdr(d->r_unr);
if (d->vr_unr != NULL)
delete_unrhdr(d->vr_unr);
}
static int
@ -512,7 +525,6 @@ int
pcm_unregister(device_t dev)
{
struct snddev_info *d;
struct pcm_channel *ch;
d = device_get_softc(dev);
@ -524,29 +536,14 @@ pcm_unregister(device_t dev)
PCM_LOCK(d);
PCM_WAIT(d);
d->flags |= SD_F_DETACHING;
d->flags &= ~SD_F_REGISTERED;
PCM_ACQUIRE(d);
PCM_UNLOCK(d);
CHN_FOREACH(ch, d, channels.pcm) {
CHN_LOCK(ch);
/*
* Do not wait for the timeout in chn_read()/chn_write(). Wake
* up the sleeping thread and kill the channel.
*/
chn_shutdown(ch);
chn_abort(ch);
CHN_UNLOCK(ch);
}
pcm_killchans(d);
/* remove /dev/sndstat entry first */
sndstat_unregister(dev);
PCM_LOCK(d);
d->flags |= SD_F_DYING;
d->flags &= ~SD_F_REGISTERED;
PCM_UNLOCK(d);
PCM_RELEASE_QUICK(d);
if (d->play_sysctl_tree != NULL) {
sysctl_ctx_free(&d->play_sysctl_ctx);
@ -557,24 +554,12 @@ pcm_unregister(device_t dev)
d->rec_sysctl_tree = NULL;
}
sndstat_unregister(dev);
mixer_uninit(dev);
dsp_destroy_dev(dev);
(void)mixer_uninit(dev);
pcm_killchans(d);
PCM_LOCK(d);
PCM_RELEASE(d);
cv_destroy(&d->cv);
PCM_UNLOCK(d);
snd_mtxfree(d->lock);
if (d->p_unr != NULL)
delete_unrhdr(d->p_unr);
if (d->vp_unr != NULL)
delete_unrhdr(d->vp_unr);
if (d->r_unr != NULL)
delete_unrhdr(d->r_unr);
if (d->vr_unr != NULL)
delete_unrhdr(d->vr_unr);
if (snd_unit == device_get_unit(dev)) {
snd_unit = pcm_best_unit(-1);

View File

@ -104,17 +104,15 @@ struct snd_mixer;
#define SD_F_SIMPLEX 0x00000001
#define SD_F_AUTOVCHAN 0x00000002
#define SD_F_SOFTPCMVOL 0x00000004
#define SD_F_DYING 0x00000008
#define SD_F_DETACHING 0x00000010
#define SD_F_BUSY 0x00000020
#define SD_F_MPSAFE 0x00000040
#define SD_F_REGISTERED 0x00000080
#define SD_F_BITPERFECT 0x00000100
#define SD_F_VPC 0x00000200 /* volume-per-channel */
#define SD_F_EQ 0x00000400 /* EQ */
#define SD_F_EQ_ENABLED 0x00000800 /* EQ enabled */
#define SD_F_EQ_BYPASSED 0x00001000 /* EQ bypassed */
#define SD_F_EQ_PC 0x00002000 /* EQ per-channel */
#define SD_F_BUSY 0x00000008
#define SD_F_MPSAFE 0x00000010
#define SD_F_REGISTERED 0x00000020
#define SD_F_BITPERFECT 0x00000040
#define SD_F_VPC 0x00000080 /* volume-per-channel */
#define SD_F_EQ 0x00000100 /* EQ */
#define SD_F_EQ_ENABLED 0x00000200 /* EQ enabled */
#define SD_F_EQ_BYPASSED 0x00000400 /* EQ bypassed */
#define SD_F_EQ_PC 0x00000800 /* EQ per-channel */
#define SD_F_EQ_DEFAULT (SD_F_EQ | SD_F_EQ_ENABLED)
#define SD_F_EQ_MASK (SD_F_EQ | SD_F_EQ_ENABLED | \
@ -127,26 +125,20 @@ struct snd_mixer;
"\001SIMPLEX" \
"\002AUTOVCHAN" \
"\003SOFTPCMVOL" \
"\004DYING" \
"\005DETACHING" \
"\006BUSY" \
"\007MPSAFE" \
"\010REGISTERED" \
"\011BITPERFECT" \
"\012VPC" \
"\013EQ" \
"\014EQ_ENABLED" \
"\015EQ_BYPASSED" \
"\016EQ_PC" \
"\004BUSY" \
"\005MPSAFE" \
"\006REGISTERED" \
"\007BITPERFECT" \
"\010VPC" \
"\011EQ" \
"\012EQ_ENABLED" \
"\013EQ_BYPASSED" \
"\014EQ_PC" \
"\035PRIO_RD" \
"\036PRIO_WR"
#define PCM_ALIVE(x) ((x) != NULL && (x)->lock != NULL && \
!((x)->flags & SD_F_DYING))
#define PCM_REGISTERED(x) (PCM_ALIVE(x) && \
((x)->flags & SD_F_REGISTERED))
#define PCM_DETACHING(x) ((x)->flags & SD_F_DETACHING)
#define PCM_ALIVE(x) ((x) != NULL && (x)->lock != NULL)
#define PCM_REGISTERED(x) (PCM_ALIVE(x) && ((x)->flags & SD_F_REGISTERED))
#define PCM_CHANCOUNT(d) \
(d->playcount + d->pvchancount + d->reccount + d->rvchancount)

View File

@ -146,20 +146,19 @@ vchan_trigger(kobj_t obj, void *data, int go)
int ret, otrigger;
info = data;
c = info->channel;
p = c->parentchannel;
CHN_LOCKASSERT(c);
if (!PCMTRIG_COMMON(go) || go == info->trigger)
return (0);
c = info->channel;
p = c->parentchannel;
otrigger = info->trigger;
info->trigger = go;
CHN_LOCKASSERT(c);
CHN_UNLOCK(c);
CHN_LOCK(p);
otrigger = info->trigger;
info->trigger = go;
switch (go) {
case PCMTRIG_START:
if (otrigger != PCMTRIG_START)

View File

@ -132,6 +132,9 @@ static VT_SYSCTL_INT(debug, 0, "vt(9) debug level");
static VT_SYSCTL_INT(deadtimer, 15, "Time to wait busy process in VT_PROCESS mode");
static VT_SYSCTL_INT(suspendswitch, 1, "Switch to VT0 before suspend");
/* Slow down and don't rely on timers and interrupts. */
static VT_SYSCTL_INT(slow_down, 0, "Non-zero makes console slower and synchronous.");
/* Allow to disable some keyboard combinations. */
static VT_SYSCTL_INT(kbd_halt, 1, "Enable halt keyboard combination. "
"See kbdmap(5) to configure.");
@ -1657,6 +1660,12 @@ vtterm_done(struct terminal *tm)
}
vd->vd_flags &= ~VDF_SPLASH;
vt_flush(vd);
} else if (vt_slow_down > 0) {
int i, j;
for (i = 0; i < vt_slow_down; i++) {
for (j = 0; j < 1000; j++)
vt_flush(vd);
}
} else if (!(vd->vd_flags & VDF_ASYNC)) {
vt_flush(vd);
}

View File

@ -225,7 +225,6 @@ static int nfs_bigreply[NFSV42_NPROCS] = { 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0,
static int nfsrv_skipace(struct nfsrv_descript *nd, int *acesizep);
static void nfsv4_wanted(struct nfsv4lock *lp);
static uint32_t nfsv4_filesavail(struct statfs *, struct mount *);
static int nfsrv_cmpmixedcase(u_char *cp, u_char *cp2, int len);
static int nfsrv_getuser(int procnum, uid_t uid, gid_t gid, char *name);
static void nfsrv_removeuser(struct nfsusrgrp *usrp, int isuser);
static int nfsrv_getrefstr(struct nfsrv_descript *, u_char **, u_char **,
@ -3438,13 +3437,13 @@ tryagain:
/*
* If an '@' is found and the domain name matches, search for
* the name with dns stripped off.
* Mixed case alphabetics will match for the domain name, but
* all upper case will not.
* The match for alphabetics is now case insensitive,
* since RFC8881 defines this string as a DNS domain name.
*/
if (cnt == 0 && i < len && i > 0 &&
(len - 1 - i) == NFSD_VNET(nfsrv_dnsnamelen) &&
!nfsrv_cmpmixedcase(cp,
NFSD_VNET(nfsrv_dnsname), NFSD_VNET(nfsrv_dnsnamelen))) {
strncasecmp(cp, NFSD_VNET(nfsrv_dnsname),
NFSD_VNET(nfsrv_dnsnamelen)) == 0) {
len -= (NFSD_VNET(nfsrv_dnsnamelen) + 1);
*(cp - 1) = '\0';
}
@ -3665,8 +3664,8 @@ tryagain:
*/
if (cnt == 0 && i < len && i > 0 &&
(len - 1 - i) == NFSD_VNET(nfsrv_dnsnamelen) &&
!nfsrv_cmpmixedcase(cp,
NFSD_VNET(nfsrv_dnsname), NFSD_VNET(nfsrv_dnsnamelen))) {
strncasecmp(cp, NFSD_VNET(nfsrv_dnsname),
NFSD_VNET(nfsrv_dnsnamelen)) == 0) {
len -= (NFSD_VNET(nfsrv_dnsnamelen) + 1);
*(cp - 1) = '\0';
}
@ -3714,35 +3713,6 @@ out:
return (error);
}
/*
* Cmp len chars, allowing mixed case in the first argument to match lower
* case in the second, but not if the first argument is all upper case.
* Return 0 for a match, 1 otherwise.
*/
static int
nfsrv_cmpmixedcase(u_char *cp, u_char *cp2, int len)
{
int i;
u_char tmp;
int fndlower = 0;
for (i = 0; i < len; i++) {
if (*cp >= 'A' && *cp <= 'Z') {
tmp = *cp++ + ('a' - 'A');
} else {
tmp = *cp++;
if (tmp >= 'a' && tmp <= 'z')
fndlower = 1;
}
if (tmp != *cp2++)
return (1);
}
if (fndlower)
return (0);
else
return (1);
}
/*
* Set the port for the nfsuserd.
*/
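
Editor's note: a small userland demonstration of the behavioral difference; the old matcher rejected an all-uppercase name, while strncasecmp(3) accepts any case mix (the domain below is illustrative):

#include <assert.h>
#include <strings.h>

int
main(void)
{
	const char *dnsname = "example.org";	/* stands in for nfsrv_dnsname */

	/* Both match now; the old code rejected the all-uppercase form. */
	assert(strncasecmp("Example.Org", dnsname, 11) == 0);
	assert(strncasecmp("EXAMPLE.ORG", dnsname, 11) == 0);
	return (0);
}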

View File

@ -465,7 +465,6 @@ minidumpsys(struct dumperinfo *di, bool livedump)
struct minidumpstate state;
struct msgbuf mb_copy;
char *msg_ptr;
size_t sz;
int error;
if (livedump) {
@ -510,9 +509,10 @@ minidumpsys(struct dumperinfo *di, bool livedump)
msgbuf_duplicate(msgbufp, &mb_copy, msg_ptr);
state.msgbufp = &mb_copy;
sz = BITSET_SIZE(vm_page_dump_pages);
state.dump_bitset = malloc(sz, M_TEMP, M_WAITOK);
BIT_COPY_STORE_REL(sz, vm_page_dump, state.dump_bitset);
state.dump_bitset = BITSET_ALLOC(vm_page_dump_pages, M_TEMP,
M_WAITOK);
BIT_COPY_STORE_REL(vm_page_dump_pages, vm_page_dump,
state.dump_bitset);
} else {
KASSERT(dumping, ("minidump invoked outside of doadump()"));
@ -524,7 +524,7 @@ minidumpsys(struct dumperinfo *di, bool livedump)
error = cpu_minidumpsys(di, &state);
if (livedump) {
free(msg_ptr, M_TEMP);
free(state.dump_bitset, M_TEMP);
BITSET_FREE(state.dump_bitset, M_TEMP);
}
return (error);
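
Editor's note: a minimal sketch of the bitset(9) helpers this change adopts; NITEMS is an arbitrary example size. BITSET_ALLOC() computes the allocation size from the bit count, so callers no longer size the set by hand with BITSET_SIZE():

#include <sys/param.h>
#include <sys/bitset.h>
#include <sys/malloc.h>

#define	NITEMS	1024
BITSET_DEFINE(item_bits, NITEMS);

static void
example(void)
{
	struct item_bits *set;

	set = BITSET_ALLOC(NITEMS, M_TEMP, M_WAITOK | M_ZERO);
	BIT_SET(NITEMS, 3, set);
	if (BIT_ISSET(NITEMS, 3, set))
		BIT_CLR(NITEMS, 3, set);
	BITSET_FREE(set, M_TEMP);
}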

View File

@ -365,6 +365,16 @@ sigqueue_start(void)
SIGFILLSET(fastblock_mask);
SIG_CANTMASK(fastblock_mask);
ast_register(TDA_SIG, ASTR_UNCOND, 0, ast_sig);
/*
* TDA_PSELECT is for the case where the signal mask should be restored
* before delivering any signals so that we do not deliver any that are
* blocked by the normal thread mask. It is mutually exclusive with
* TDA_SIGSUSPEND, which should be used if we *do* want to deliver
* signals that are normally blocked, e.g., if it interrupted our sleep.
*/
ast_register(TDA_PSELECT, ASTR_ASTF_REQUIRED | ASTR_TDP,
TDP_OLDMASK, ast_sigsuspend);
ast_register(TDA_SIGSUSPEND, ASTR_ASTF_REQUIRED | ASTR_TDP,
TDP_OLDMASK, ast_sigsuspend);
}

View File

@ -133,8 +133,10 @@ livedump_start_vnode(struct vnode *vp, int flags, uint8_t compression)
if (error != 0)
goto out;
curthread->td_pflags2 |= TDP2_SAN_QUIET;
dump_savectx();
error = minidumpsys(livedi, true);
curthread->td_pflags2 &= ~TDP2_SAN_QUIET;
EVENTHANDLER_INVOKE(livedumper_finish);
out:

View File

@ -405,6 +405,9 @@ kasan_shadow_check(unsigned long addr, size_t size, bool write,
if (__predict_false(!kasan_enabled))
return;
if (__predict_false(curthread != NULL &&
(curthread->td_pflags2 & TDP2_SAN_QUIET) != 0))
return;
if (__predict_false(size == 0))
return;
if (__predict_false(kasan_md_unsupported(addr)))

View File

@ -2591,7 +2591,13 @@ device_attach(device_t dev)
int error;
if (resource_disabled(dev->driver->name, dev->unit)) {
/*
* Mostly detach the device, but leave it attached to
* the devclass to reserve the name and unit.
*/
device_disable(dev);
(void)device_set_driver(dev, NULL);
dev->state = DS_NOTPRESENT;
if (bootverbose)
device_printf(dev, "disabled via hints entry\n");
return (ENXIO);
@ -5759,17 +5765,20 @@ devctl2_ioctl(struct cdev *cdev, u_long cmd, caddr_t data, int fflag,
* attach the device rather than doing a full probe.
*/
device_enable(dev);
if (device_is_alive(dev)) {
if (dev->devclass != NULL) {
/*
* If the device was disabled via a hint, clear
* the hint.
*/
if (resource_disabled(dev->driver->name, dev->unit))
resource_unset_value(dev->driver->name,
if (resource_disabled(dev->devclass->name, dev->unit))
resource_unset_value(dev->devclass->name,
dev->unit, "disabled");
error = device_attach(dev);
} else
error = device_probe_and_attach(dev);
/* Allow any drivers to rebid. */
if (!(dev->flags & DF_FIXEDCLASS))
devclass_delete_device(dev->devclass, dev);
}
error = device_probe_and_attach(dev);
break;
case DEV_DISABLE:
if (!device_is_enabled(dev)) {
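
Editor's note: the DEV_ENABLE case above is the kernel side of devctl(8)'s enable verb; pcm0 below is just an example unit:

# devctl disable pcm0
# devctl enable pcm0

With this change, re-enabling clears a hints "disabled" entry keyed by the devclass name (the driver may be NULL at that point) and, unless the device has a fixed class, removes it from the devclass so that any driver may rebid during the subsequent device_probe_and_attach().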

View File

@ -179,6 +179,9 @@ kmsan_report_hook(const void *addr, msan_orig_t *orig, size_t size, size_t off,
if (__predict_false(KERNEL_PANICKED() || kdb_active || kmsan_reporting))
return;
if (__predict_false(curthread != NULL &&
(curthread->td_pflags2 & TDP2_SAN_QUIET) != 0))
return;
kmsan_reporting = true;
__compiler_membar();
@ -232,6 +235,9 @@ kmsan_report_inline(msan_orig_t orig, unsigned long pc)
if (__predict_false(KERNEL_PANICKED() || kdb_active || kmsan_reporting))
return;
if (__predict_false(curthread != NULL &&
(curthread->td_pflags2 & TDP2_SAN_QUIET) != 0))
return;
kmsan_reporting = true;
__compiler_membar();

View File

@ -1049,14 +1049,26 @@ kern_pselect(struct thread *td, int nd, fd_set *in, fd_set *ou, fd_set *ex,
if (error != 0)
return (error);
td->td_pflags |= TDP_OLDMASK;
}
error = kern_select(td, nd, in, ou, ex, tvp, abi_nfdbits);
if (uset != NULL) {
/*
* Make sure that ast() is called on return to
* usermode and TDP_OLDMASK is cleared, restoring old
* sigmask.
* sigmask. If we didn't get interrupted, then the caller is
* likely not expecting to receive a signal that its signal mask
* would normally block, so we restore the mask before
* any signals could be delivered.
*/
ast_sched(td, TDA_SIGSUSPEND);
if (error == EINTR) {
ast_sched(td, TDA_SIGSUSPEND);
} else {
/* *select(2) should never restart. */
MPASS(error != ERESTART);
ast_sched(td, TDA_PSELECT);
}
}
error = kern_select(td, nd, in, ou, ex, tvp, abi_nfdbits);
return (error);
}
@ -1528,12 +1540,6 @@ kern_poll_kfds(struct thread *td, struct pollfd *kfds, u_int nfds,
if (error)
return (error);
td->td_pflags |= TDP_OLDMASK;
/*
* Make sure that ast() is called on return to
* usermode and TDP_OLDMASK is cleared, restoring old
* sigmask.
*/
ast_sched(td, TDA_SIGSUSPEND);
}
seltdinit(td);
@ -1556,6 +1562,22 @@ kern_poll_kfds(struct thread *td, struct pollfd *kfds, u_int nfds,
error = EINTR;
if (error == EWOULDBLOCK)
error = 0;
if (uset != NULL) {
/*
* Make sure that ast() is called on return to
* usermode and TDP_OLDMASK is cleared, restoring old
* sigmask. If we didn't get interrupted, then the caller is
* likely not expecting to receive a signal that its signal mask
* would normally block, so we restore the mask before
* any signals could be delivered.
*/
if (error == EINTR)
ast_sched(td, TDA_SIGSUSPEND);
else
ast_sched(td, TDA_PSELECT);
}
return (error);
}
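
Editor's note: the userland contract all of this preserves, sketched below; SIGINT is an arbitrary example signal. pselect(2) installs a temporary mask for the duration of the wait, and these changes ensure the original mask is put back before any signal delivery unless the call was actually interrupted (EINTR), in which case delivery happens under the temporary mask:

#include <sys/select.h>

#include <signal.h>
#include <stddef.h>

int
wait_readable(int fd)
{
	sigset_t block, old;
	fd_set rset;

	/* Normally keep SIGINT blocked... */
	sigemptyset(&block);
	sigaddset(&block, SIGINT);
	sigprocmask(SIG_BLOCK, &block, &old);

	FD_ZERO(&rset);
	FD_SET(fd, &rset);
	/*
	 * ...but let it interrupt the wait: the pre-block mask is in
	 * effect only while pselect() sleeps (the TDA_SIGSUSPEND case
	 * if it fires, the TDA_PSELECT case if it does not).
	 */
	return (pselect(fd + 1, &rset, NULL, NULL, NULL, &old));
}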

Some files were not shown because too many files have changed in this diff.