remove the logic in cm_Analyze that performs a flush of the server
data and retries when all of the servers are marked down (aka ALLOFFLINE).
Instead, return an immediate error to the caller. The servers will be
checked by the background daemon thread and made available again once
they respond.
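
A minimal sketch of the new control flow, assuming the cache manager's
CM_ERROR_ALLOFFLINE error code; the function shell and placeholder
value below are invented for illustration:

    /* placeholder value for illustration only; the real code takes
     * the error value from the OpenAFS headers */
    #define CM_ERROR_ALLOFFLINE 1

    long
    AnalyzeWhenAllDown(int allServersDown)
    {
        if (allServersDown) {
            /* old behavior: flush the server data and retry here;
             * new behavior: fail fast and let the background daemon's
             * probes mark the servers up once they answer */
            return CM_ERROR_ALLOFFLINE;
        }
        return 0;   /* at least one server is usable; proceed */
    }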
* add a version of rxi_DebugPrint for Windows that uses OutputDebugString
(sketched below, after this list)
* migrate all printf statements to the dpf macro
* stop masking the errors from rx_sendmsg() so that higher level functions
can make decisions based upon the failure.
* Windows reports EHOSTUNREACH. As on Linux, when it is reported,
reset the send packet start time to 0 in order to immediately cause the
server to be marked down (see the second sketch after this list).
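
A sketch of a Windows rxi_DebugPrint, assuming the usual varargs
signature; the buffer size and formatting details are illustrative:

    #include <windows.h>
    #include <stdarg.h>
    #include <stdio.h>

    void
    rxi_DebugPrint(char *format, ...)
    {
        char msg[512];
        va_list ap;

        /* format into a stack buffer, then hand the text to the
         * debugger instead of writing it to stderr */
        va_start(ap, format);
        _vsnprintf(msg, sizeof(msg) - 1, format, ap);
        va_end(ap);
        msg[sizeof(msg) - 1] = '\0';

        OutputDebugStringA(msg);  /* visible in a debugger or DebugView */
    }

And a hypothetical reduction of the EHOSTUNREACH handling; the packet
structure and field name are invented for illustration:

    #include <errno.h>

    struct packet {
        unsigned int firstSent;   /* time the packet was first sent */
    };

    static void
    HandleSendError(struct packet *p, int err)
    {
        /* an unreachable host will not recover within the retry
         * window, so zero the start time to trip the server-down
         * check immediately */
        if (err == EHOSTUNREACH)
            p->firstSent = 0;
    }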
heavily reworked by jaltman@secure-endpoints.com
and then a little further editing by me
see if we can avoid going to sleep forever waiting on the tq to flush
====================
This delta was composed from multiple commits as part of the CVS->Git migration.
The checkin message with each commit was inconsistent.
The following are the additional commit messages.
====================
do not decrement tqWaiters in the while evaluation. This will
result in an invalid count if the value was zero to begin with.
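
An illustrative reduction of the bug; the counter name follows the
commit, but the function shell is invented:

    /* buggy form: while (tqWaiters--) { ... }
     * the decrement happens even when the count is already zero,
     * wrapping the unsigned value to a huge number */

    static void
    WakeWaiters(unsigned int *tqWaiters)
    {
        /* fixed form: test first, decrement only for a real waiter */
        while (*tqWaiters > 0) {
            (*tqWaiters)--;
            /* ... signal one waiting thread ... */
        }
    }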
Begin to store the Disk Volume Serial Number and Machine SID in the
AFSCache file for use in detecting system clones. Clones must get
a new UUID for the AFS Client.
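
A minimal sketch of fetching the volume serial number with the Win32
GetVolumeInformation API; the helper name is an invention for
illustration:

    #include <windows.h>

    static BOOL
    GetCacheVolumeSerial(const char *rootPath, DWORD *serialp)
    {
        /* rootPath is e.g. "C:\\"; only the serial number is wanted */
        return GetVolumeInformationA(rootPath, NULL, 0, serialp,
                                     NULL, NULL, NULL, 0);
    }

If the values recorded in the AFSCache file no longer match the running
system, the client can conclude it is a clone and generate a fresh UUID.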
"fs flushall" is like "fs flushvolume" but flushes all data in the cache
====================
This delta was composed from multiple commits as part of the CVS->Git migration.
The checkin message with each commit was inconsistent.
The following are the additional commit messages.
====================
typo
As stated in the afs-install-notes, the MS Client for Networks should
be enabled on the loopback adapter, so enable it.
Prevent an install failure by not calling CoInitialize twice in the same
thread.
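
A hypothetical sketch of the guard; CoInitialize returns S_FALSE when
the thread has already been initialized, so only a first S_OK needs a
matching CoUninitialize:

    #include <objbase.h>

    static HRESULT
    SafeCoInitialize(BOOL *ownedp)
    {
        HRESULT hr = CoInitialize(NULL);

        *ownedp = (hr == S_OK);  /* we own the init only on S_OK */
        if (hr == S_FALSE)
            return S_OK;         /* already initialized; not a failure */
        return hr;
    }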
When updating cell information from DNS, be sure to set a new timeout.
When obtaining cell information from a file, check every two hours to
see if it changed.
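
A minimal sketch of the two-hour recheck, with an invented helper name
and constant:

    #include <time.h>

    #define CELL_FILE_RECHECK_SECS (2 * 60 * 60)

    static int
    CellFileInfoStale(time_t lastCheck)
    {
        /* re-read the cell file once the recheck interval elapses */
        return time(NULL) - lastCheck >= CELL_FILE_RECHECK_SECS;
    }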
Add support to allow use of \\AFS\<foo> where <foo> is either a mount
point or symlink. As <foo> is normally treated as a share name, we
transform it into \\AFS\all\<foo> for processing.
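
A sketch of the rewrite, reduced to plain C strings (the real client
operates on counted UNC strings; the helper name is invented):

    #include <stdio.h>

    static void
    RewriteAfsShare(const char *name, char *out, size_t outlen)
    {
        /* \\AFS\<foo> -> \\AFS\all\<foo>, where "all" is the share
         * under which the whole AFS namespace is exposed */
        snprintf(out, outlen, "\\\\AFS\\all\\%s", name);
    }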
Now that OAFW is ready for a stable series, we will default "fs trace"
to off on non-Debug builds. It can be set to on via the TraceOption
registry value. (see registry.txt)
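
Illustratively, the default might be keyed off the build type, with the
registry value overriding it at startup; the variable name here is
invented:

    #ifdef DEBUG
    int cm_traceDefault = 1;   /* Debug builds: trace on */
    #else
    int cm_traceDefault = 0;   /* release builds: off unless the
                                * TraceOption registry value says so */
    #endif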
It was reported that Microsoft Word when editing files stored in AFS
would cause OAFW to fail to respond. It was determined that a scp->mx
lock was not being released in buf_WaitIO if no one was waiting
on the scp.
This patch corrects the deadlock and fixes some debugging messages.
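
The shape of the fix, reduced to a plain critical section with invented
names (the real routine is buf_WaitIO and the lock is scp->mx):

    #include <windows.h>

    typedef struct {
        CRITICAL_SECTION mx;
        int waiters;
    } scache_t;

    static void
    FinishIO(scache_t *scp)
    {
        EnterCriticalSection(&scp->mx);
        if (scp->waiters > 0) {
            /* ... wake the waiting threads ... */
            scp->waiters = 0;
        }
        /* the fix: release on this path too, not only after waking
         * a waiter; otherwise the next locker deadlocks */
        LeaveCriticalSection(&scp->mx);
    }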
The log message added to buf_LockedCleanAsync() during the debugging
post 1.3.8201 is output for 1/10th of all buffers once every 5 seconds.
This is a huge performance hit. Move the message so that it is only
output for buffers that are actually dirty.
Also, change the algorithm so that sqrt() of the number of buffers is
checked every 5 seconds instead of 1/10th of them. This will do a better
job with very large cache sizes.
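
A sketch of the new scan count, with an invented helper name:

    #include <math.h>

    static long
    buf_ScanCount(long nbuffers)
    {
        /* sqrt(n) grows much more slowly than n/10 for large caches */
        long n = (long)sqrt((double)nbuffers);
        return n > 0 ? n : 1;
    }

For example, with 100,000 cache buffers the old scheme examined 10,000
per pass while sqrt() gives roughly 316.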
Added a new option for viewing the trace log data in real time
====================
This delta was composed from multiple commits as part of the CVS->Git migration.
The checkin message with each commit was inconsistent.
The following are the additional commit messages.
====================
Include the Thread ID in the output to make it usable for debugging
deadlocks.
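
A minimal sketch using the Win32 thread id (the helper name is
invented):

    #include <windows.h>
    #include <stdio.h>

    static void
    TracePrefix(FILE *fp)
    {
        /* tag each line so interleaved output from competing threads
         * can be attributed when hunting a deadlock */
        fprintf(fp, "[tid %5lu] ", (unsigned long)GetCurrentThreadId());
    }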
====================
alter the afsd_init.log tag for the TraceOption to not be
Windows Event Log specific.
FIXES 20954
ConstructLocalPath only checks
the first argument (cpath) for needed translation from canonical to
local, but not the relativeTo path, which is simply prepended when
cpath doesn't begin with a '/'.
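
A hypothetical sketch of the corrected joining logic; the translation
is reduced to '/' -> '\' and both helper names are invented:

    #include <stdio.h>

    static void
    ToLocal(const char *in, char *out, size_t outlen)
    {
        size_t i;
        for (i = 0; in[i] != '\0' && i + 1 < outlen; i++)
            out[i] = (in[i] == '/') ? '\\' : in[i];
        out[i] = '\0';
    }

    static void
    ConstructLocalPathSketch(const char *cpath, const char *relativeTo,
                             char *out, size_t outlen)
    {
        char localRel[260], localCpath[260];

        ToLocal(cpath, localCpath, sizeof(localCpath));
        if (cpath[0] != '/') {
            /* the fix: translate relativeTo too, instead of
             * prepending it untranslated */
            ToLocal(relativeTo, localRel, sizeof(localRel));
            snprintf(out, outlen, "%s\\%s", localRel, localCpath);
        } else {
            snprintf(out, outlen, "%s", localCpath);
        }
    }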
This patch tries to implement the afsd default tuning parameters
discussed in the thread starting at
https://www.openafs.org/pipermail/openafs-devel/2005-May/012158.html
I took the liberty of adding chunksize tuning to the memcache too,
with the motivation that people using large memcaches usually want
better bulk performance as well.
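
An illustration of the idea: scale the default chunk size with the
cache size so big caches get better bulk transfers. The thresholds and
values below are invented, not those from the patch:

    /* cacheBlocks is the cache size in 1K blocks */
    static int
    DefaultChunkBits(long cacheBlocks)
    {
        if (cacheBlocks >= 1024 * 1024)   /* >= 1 GB cache */
            return 18;                    /* 256 KB chunks */
        if (cacheBlocks >= 128 * 1024)    /* >= 128 MB cache */
            return 17;                    /* 128 KB chunks */
        return 16;                        /* 64 KB chunks */
    }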
It seems to work for me using both disk cache and memcache of various
sizes.