pub(crate) struct PEER_ID_CACHE {
    pub(crate) __private_field: (),
}

Fields§

§__private_field: ()

Methods from Deref<Target = Cache<PeerId, OffchainPublicKey>>§
pub fn policy(&self) -> Policy

Returns a read-only cache policy of this cache.
At this time, the cache policy cannot be modified after cache creation. A future version may support modifying it.
pub fn entry_count(&self) -> u64

Returns an approximate number of entries in this cache.
The value returned is an estimate; the actual count may differ if there are concurrent insertions or removals, or if some entries are pending removal due to expiration. This inaccuracy can be mitigated by performing a run_pending_tasks first.
§Example
use moka::sync::Cache;

let cache = Cache::new(10);
cache.insert('n', "Netherland Dwarf");
cache.insert('l', "Lop Eared");
cache.insert('d', "Dutch");

// Ensure an entry exists.
assert!(cache.contains_key(&'n'));

// However, the following may print stale numbers (zeros) instead of threes.
println!("{}", cache.entry_count());   // -> 0
println!("{}", cache.weighted_size()); // -> 0

// To mitigate the inaccuracy, call the `run_pending_tasks` method to run
// pending internal tasks.
cache.run_pending_tasks();

// The following will print the actual numbers.
println!("{}", cache.entry_count());   // -> 3
println!("{}", cache.weighted_size()); // -> 3

pub fn weighted_size(&self) -> u64
Returns an approximate total weighted size of entries in this cache.
The value returned is an estimate; the actual size may differ if there are concurrent insertions or removals, or if some entries are pending removal due to expiration. This inaccuracy can be mitigated by performing a run_pending_tasks first. See entry_count for sample code.
pub fn contains_key<Q>(&self, key: &Q) -> bool

Returns true if the cache contains a value for the key.
Unlike the get method, this method is not considered a cache read operation, so it does not update the historic popularity estimator or reset the idle timer for the key.
The key may be any borrowed form of the cache’s key type, but Hash and Eq on the borrowed form must match those for the key type.
pub fn get<Q>(&self, key: &Q) -> Option<V>

Returns a clone of the value corresponding to the key.
If you want to store values that will be expensive to clone, wrap them in std::sync::Arc before storing them in the cache. Arc is a thread-safe reference-counted pointer, and its clone() method is cheap.
The key may be any borrowed form of the cache’s key type, but Hash and Eq on the borrowed form must match those for the key type.
pub fn entry(&self, key: K) -> OwnedKeyEntrySelector<'_, K, V, S>

Takes a key K and returns an OwnedKeyEntrySelector that can be used to select or insert an entry.
§Example
use moka::sync::Cache;

let cache: Cache<String, u32> = Cache::new(100);
let key = "key1".to_string();

let entry = cache.entry(key.clone()).or_insert(3);
assert!(entry.is_fresh());
assert_eq!(entry.key(), &key);
assert_eq!(entry.into_value(), 3);

let entry = cache.entry(key).or_insert(6);
// Not fresh because the value was already in the cache.
assert!(!entry.is_fresh());
assert_eq!(entry.into_value(), 3);

pub fn entry_by_ref<'a, Q>(&'a self, key: &'a Q) -> RefKeyEntrySelector<'a, K, Q, V, S>
Takes a reference &Q of a key and returns a RefKeyEntrySelector that can be used to select or insert an entry.
§Example
use moka::sync::Cache;

let cache: Cache<String, u32> = Cache::new(100);
let key = "key1".to_string();

let entry = cache.entry_by_ref(&key).or_insert(3);
assert!(entry.is_fresh());
assert_eq!(entry.key(), &key);
assert_eq!(entry.into_value(), 3);

let entry = cache.entry_by_ref(&key).or_insert(6);
// Not fresh because the value was already in the cache.
assert!(!entry.is_fresh());
assert_eq!(entry.into_value(), 3);

pub fn get_with(&self, key: K, init: impl FnOnce() -> V) -> V
Returns a clone of the value corresponding to the key. If the value does
not exist, evaluates the init closure and inserts the output.
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same nonexistent key are coalesced into one evaluation of the init closure. Only one of the calls evaluates its closure, and the other calls wait for that closure to complete.
The following code snippet demonstrates this behavior:
use moka::sync::Cache;
use std::{sync::Arc, thread};

const TEN_MIB: usize = 10 * 1024 * 1024; // 10MiB
let cache = Cache::new(100);

// Spawn four threads.
let threads: Vec<_> = (0..4_u8)
    .map(|task_id| {
        let my_cache = cache.clone();
        thread::spawn(move || {
            println!("Thread {task_id} started.");

            // Try to insert and get the value for key1. Although all four
            // threads will call `get_with` at the same time, the `init` closure
            // must be evaluated only once.
            let value = my_cache.get_with("key1", || {
                println!("Thread {task_id} inserting a value.");
                Arc::new(vec![0u8; TEN_MIB])
            });

            // Ensure the value exists now.
            assert_eq!(value.len(), TEN_MIB);
            assert!(my_cache.get(&"key1").is_some());

            println!("Thread {task_id} got the value. (len: {})", value.len());
        })
    })
    .collect();

// Wait for all threads to complete.
threads
    .into_iter()
    .for_each(|t| t.join().expect("Thread failed"));

§Result
- The init closure was called exactly once by thread 1.
- Other threads were blocked until thread 1 inserted the value.

Thread 1 started.
Thread 0 started.
Thread 3 started.
Thread 2 started.
Thread 1 inserting a value.
Thread 2 got the value. (len: 10485760)
Thread 1 got the value. (len: 10485760)
Thread 0 got the value. (len: 10485760)
Thread 3 got the value. (len: 10485760)

§Panics
This method panics when the init closure has panicked. When that happens, only the caller whose init closure panicked will get the panic (e.g. only thread 1 in the above sample). If there are other calls in progress (e.g. threads 0, 2, and 3 above), this method will restart and resolve one of the remaining init closures.
pub fn get_with_by_ref<Q>(&self, key: &Q, init: impl FnOnce() -> V) -> V

Similar to get_with, but instead of passing an owned key, you can pass a reference to the key. If the key does not exist in the cache, the key will be cloned to create a new entry in the cache.
pub fn get_with_if(
    &self,
    key: K,
    init: impl FnOnce() -> V,
    replace_if: impl FnMut(&V) -> bool,
) -> V

👎Deprecated since 0.10.0: Replaced with entry().or_insert_with_if(). TODO: Remove this in v0.13.0.
pub fn optionally_get_with<F>(&self, key: K, init: F) -> Option<V>

Returns a clone of the value corresponding to the key. If the value does not exist, evaluates the init closure, and inserts the value if Some(value) was returned. If None was returned from the closure, this method does not insert a value and returns None.
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same nonexistent key are coalesced into one evaluation of the init closure. Only one of the calls evaluates its closure, and the other calls wait for that closure to complete.
The following code snippet demonstrates this behavior:
use moka::sync::Cache;
use std::{path::Path, thread};

/// This function tries to get the file size in bytes.
fn get_file_size(thread_id: u8, path: impl AsRef<Path>) -> Option<u64> {
    println!("get_file_size() called by thread {thread_id}.");
    std::fs::metadata(path).ok().map(|m| m.len())
}

let cache = Cache::new(100);

// Spawn four threads.
let threads: Vec<_> = (0..4_u8)
    .map(|thread_id| {
        let my_cache = cache.clone();
        thread::spawn(move || {
            println!("Thread {thread_id} started.");

            // Try to insert and get the value for key1. Although all four
            // threads will call `optionally_get_with` at the same time,
            // get_file_size() must be called only once.
            let value = my_cache.optionally_get_with(
                "key1",
                || get_file_size(thread_id, "./Cargo.toml"),
            );

            // Ensure the value exists now.
            assert!(value.is_some());
            assert!(my_cache.get(&"key1").is_some());

            println!(
                "Thread {thread_id} got the value. (len: {})",
                value.unwrap()
            );
        })
    })
    .collect();

// Wait for all threads to complete.
threads
    .into_iter()
    .for_each(|t| t.join().expect("Thread failed"));

§Result
- get_file_size() was called exactly once by thread 0.
- Other threads were blocked until thread 0 inserted the value.

Thread 0 started.
Thread 1 started.
Thread 2 started.
get_file_size() called by thread 0.
Thread 3 started.
Thread 2 got the value. (len: 1466)
Thread 0 got the value. (len: 1466)
Thread 1 got the value. (len: 1466)
Thread 3 got the value. (len: 1466)

§Panics
This method panics when the init closure has panicked. When that happens, only the caller whose init closure panicked will get the panic (e.g. only thread 0 in the above sample). If there are other calls in progress (e.g. threads 1, 2, and 3 above), this method will restart and resolve one of the remaining init closures.
pub fn optionally_get_with_by_ref<F, Q>(&self, key: &Q, init: F) -> Option<V>

Similar to optionally_get_with, but instead of passing an owned key, you can pass a reference to the key. If the key does not exist in the cache, the key will be cloned to create a new entry in the cache.
pub fn try_get_with<F, E>(&self, key: K, init: F) -> Result<V, Arc<E>>

Returns a clone of the value corresponding to the key. If the value does not exist, evaluates the init closure, and inserts the value if Ok(value) was returned. If Err(_) was returned from the closure, this method does not insert a value and returns the Err wrapped by std::sync::Arc.
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same nonexistent key are coalesced into one evaluation of the init closure (as long as these closures return the same error type). Only one of the calls evaluates its closure, and the other calls wait for that closure to complete.
The following code snippet demonstrates this behavior:
use moka::sync::Cache;
use std::{path::Path, thread};

/// This function tries to get the file size in bytes.
fn get_file_size(thread_id: u8, path: impl AsRef<Path>) -> Result<u64, std::io::Error> {
    println!("get_file_size() called by thread {thread_id}.");
    Ok(std::fs::metadata(path)?.len())
}

let cache = Cache::new(100);

// Spawn four threads.
let threads: Vec<_> = (0..4_u8)
    .map(|thread_id| {
        let my_cache = cache.clone();
        thread::spawn(move || {
            println!("Thread {thread_id} started.");

            // Try to insert and get the value for key1. Although all four
            // threads will call `try_get_with` at the same time,
            // get_file_size() must be called only once.
            let value = my_cache.try_get_with(
                "key1",
                || get_file_size(thread_id, "./Cargo.toml"),
            );

            // Ensure the value exists now.
            assert!(value.is_ok());
            assert!(my_cache.get(&"key1").is_some());

            println!(
                "Thread {thread_id} got the value. (len: {})",
                value.unwrap()
            );
        })
    })
    .collect();

// Wait for all threads to complete.
threads
    .into_iter()
    .for_each(|t| t.join().expect("Thread failed"));

§Result
- get_file_size() was called exactly once by thread 1.
- Other threads were blocked until thread 1 inserted the value.

Thread 1 started.
Thread 2 started.
get_file_size() called by thread 1.
Thread 3 started.
Thread 0 started.
Thread 2 got the value. (len: 1466)
Thread 0 got the value. (len: 1466)
Thread 1 got the value. (len: 1466)
Thread 3 got the value. (len: 1466)

§Panics
This method panics when the init closure has panicked. When that happens, only the caller whose init closure panicked will get the panic (e.g. only thread 1 in the above sample). If there are other calls in progress (e.g. threads 0, 2, and 3 above), this method will restart and resolve one of the remaining init closures.
pub fn try_get_with_by_ref<F, E, Q>(&self, key: &Q, init: F) -> Result<V, Arc<E>>

Similar to try_get_with, but instead of passing an owned key, you can pass a reference to the key. If the key does not exist in the cache, the key will be cloned to create a new entry in the cache.
pub fn insert(&self, key: K, value: V)

Inserts a key-value pair into the cache.
If the cache has this key present, the value is updated.
pub fn invalidate<Q>(&self, key: &Q)

Discards any cached value for the key.
If you need to get the value that has been discarded, use the remove method instead.
The key may be any borrowed form of the cache’s key type, but Hash and Eq on the borrowed form must match those for the key type.
pub fn remove<Q>(&self, key: &Q) -> Option<V>

Discards any cached value for the key and returns a clone of the value.
If you do not need to get the value that has been discarded, use the invalidate method instead.
The key may be any borrowed form of the cache’s key type, but Hash and Eq on the borrowed form must match those for the key type.
pub fn invalidate_all(&self)

Discards all cached values.
This method returns immediately by just setting the current time as the invalidation time. get and other retrieval methods are guaranteed not to return entries inserted before or at the invalidation time.
The actual removal of the invalidated entries is done as a maintenance task driven by a user thread. For more details, see the Maintenance Tasks section in the crate-level documentation.
Like the invalidate method, this method does not clear the historic popularity estimator of keys, so it retains the record of clients’ attempts to retrieve items.
pub fn invalidate_entries_if<F>(&self, predicate: F) -> Result<String, PredicateError>
Discards cached values that satisfy a predicate.
invalidate_entries_if takes a closure that returns true or false. The closure is called against each cached entry inserted before or at the time this method was called. If the closure returns true, that entry is evicted from the cache.
This method returns immediately without actually removing the invalidated entries. Instead, it registers the predicate with the cache along with the time this method was called. The actual removal of the invalidated entries is done as a maintenance task driven by a user thread. For more details, see the Maintenance Tasks section in the crate-level documentation.
get and other retrieval methods also apply the closure to a cached entry to determine whether it should have been invalidated, so these methods are guaranteed not to return invalidated values.
Note that you must call CacheBuilder::support_invalidation_closures at cache creation time, as the cache needs to maintain additional internal data structures to support this method. Otherwise, calling this method will fail with a PredicateError::InvalidationClosuresDisabled.
Like the invalidate method, this method does not clear the historic popularity estimator of keys, so it retains the record of clients’ attempts to retrieve items.
pub fn iter(&self) -> Iter<'_, K, V>

Creates an iterator visiting all key-value pairs in arbitrary order. The iterator element type is (Arc<K>, V), where V is a clone of a stored value.
Iterators do not block concurrent reads and writes on the cache. An entry can be inserted to, invalidated or evicted from a cache while iterators are alive on the same cache.
Unlike the get method, visiting entries via an iterator does not update the historic popularity estimator or reset idle timers for keys.
§Guarantees
In order to allow concurrent access to the cache, the iterator’s next method does not guarantee the following:
- It does not guarantee to return a key-value pair (an entry) if its key has been inserted into the cache after the iterator was created.
  - Such an entry may or may not be returned depending on the key’s hash and timing.
and the next method guarantees the following:
- It guarantees not to return the same entry more than once.
- It guarantees not to return an entry if it has been removed from the cache after the iterator was created.
  - Note: An entry can be removed for the following reasons:
    - Manually invalidated.
    - Expired (e.g. time-to-live).
    - Evicted as the cache capacity was exceeded.
§Examples
use moka::sync::Cache;

let cache = Cache::new(100);
cache.insert("Julia", 14);

let mut iter = cache.iter();
let (k, v) = iter.next().unwrap(); // (Arc<K>, V)
assert_eq!(*k, "Julia");
assert_eq!(v, 14);

assert!(iter.next().is_none());

pub fn run_pending_tasks(&self)

Performs any pending maintenance operations needed by the cache.
Trait Implementations§

impl Deref for PEER_ID_CACHE
    type Target = Cache<PeerId, OffchainPublicKey>
    fn deref(&self) -> &Cache<PeerId, OffchainPublicKey>

impl LazyStatic for PEER_ID_CACHE

Auto Trait Implementations§

impl Freeze for PEER_ID_CACHE
impl RefUnwindSafe for PEER_ID_CACHE
impl Send for PEER_ID_CACHE
impl Sync for PEER_ID_CACHE
impl Unpin for PEER_ID_CACHE
impl UnwindSafe for PEER_ID_CACHE
Blanket Implementations§

impl<'a, T, E> AsTaggedExplicit<'a, E> for T where T: 'a
impl<'a, T, E> AsTaggedImplicit<'a, E> for T where T: 'a
impl<T> BorrowMut<T> for T where T: ?Sized
    fn borrow_mut(&mut self) -> &mut T
impl<T> Conv for T
impl<T> FmtForward for T
    fn fmt_binary(self) -> FmtBinary<Self> where Self: Binary
    fn fmt_display(self) -> FmtDisplay<Self> where Self: Display
    fn fmt_lower_exp(self) -> FmtLowerExp<Self> where Self: LowerExp
    fn fmt_lower_hex(self) -> FmtLowerHex<Self> where Self: LowerHex
    fn fmt_octal(self) -> FmtOctal<Self> where Self: Octal
    fn fmt_pointer(self) -> FmtPointer<Self> where Self: Pointer
    fn fmt_upper_exp(self) -> FmtUpperExp<Self> where Self: UpperExp
    fn fmt_upper_hex(self) -> FmtUpperHex<Self> where Self: UpperHex
    fn fmt_list(self) -> FmtList<Self> where &'a Self: for<'a> IntoIterator
impl<T> FutureExt for T
    fn with_context(self, otel_cx: Context) -> WithContext<Self>
    fn with_current_context(self) -> WithContext<Self>
impl<T> Instrument for T
    fn instrument(self, span: Span) -> Instrumented<Self>
    fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
    fn into_either(self, into_left: bool) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise into a Right variant.
    fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise into a Right variant.
impl<T> Pipe for T where T: ?Sized
    fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R where Self: Sized
    fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where R: 'a
    fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where R: 'a
    fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R
    fn pipe_borrow_mut<'a, B, R>(&'a mut self, func: impl FnOnce(&'a mut B) -> R) -> R
    fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R
    fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R
    fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R
impl<T> Pointable for T
impl<T> PolicyExt for T where T: ?Sized
impl<T> Tap for T
    fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self
    fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self
    fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self
    fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self
    fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self
    fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self
    fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
    fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
    fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self
    fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self
    fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self
    fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self
    fn tap_deref_dbg<T>(self, func: impl FnOnce(&T)) -> Self
    (The _dbg variants run only in debug builds and are erased in release builds.)