pub type BitSlice<T = u8> = BitSlice<T>;
§Aliased Type
struct BitSlice<T = u8> { /* private fields */ }
§Implementations
§impl<T, O> BitSlice<T, O>
where
    T: BitStore + Radium,
    O: BitOrder,
Methods available only when T allows shared mutability.
pub fn set_aliased(&self, index: usize, value: bool)
Writes a new value into a single bit, using alias-safe operations.
This is equivalent to .set(), except that it does not require an &mut reference, and allows bit-slices with alias-safe storage to share write permissions.
§Parameters
- &self: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.
- index: The bit index to set. It must be in 0 .. self.len().
- value: The new bit-value to write into the bit at index.
§Panics
This panics if index is out of bounds.
§Examples
use bitvec::prelude::*;
use core::cell::Cell;
let bits: &BitSlice<_, _> = bits![Cell<usize>, Lsb0; 0, 1];
bits.set_aliased(0, true);
bits.set_aliased(1, false);
assert_eq!(bits, bits![1, 0]);
pub unsafe fn set_aliased_unchecked(&self, index: usize, value: bool)
Writes a new value into a single bit, using alias-safe operations and without bounds checking.
This is equivalent to .set_unchecked(), except that it does not require an &mut reference, and allows bit-slices with alias-safe storage to share write permissions.
§Parameters
- &self: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.
- index: The bit index to set. It must be in 0 .. self.len().
- value: The new bit-value to write into the bit at index.
§Safety
The caller must ensure that index is not out of bounds.
§Examples
use bitvec::prelude::*;
use core::cell::Cell;
let data = Cell::new(0u8);
let bits = &data.view_bits::<Lsb0>()[.. 2];
unsafe {
bits.set_aliased_unchecked(3, true);
}
assert_eq!(data.get(), 8);
§impl<T, O> BitSlice<T, O>
where
    T: BitStore,
    O: BitOrder,
Alternates of standard APIs.
pub fn as_bitptr(&self) -> BitPtr<Const, T, O>
pub fn as_mut_bitptr(&mut self) -> BitPtr<Mut, T, O>
pub fn as_bitptr_range(&self) -> BitPtrRange<Const, T, O>
Views the bit-slice as a half-open range of bit-pointers, to its first bit in the bit-slice and the first bit beyond it.
§Original
§API Differences
This is renamed to indicate that it returns a bitvec structure, rather than an ordinary Range.
§Notes
BitSlice does define a .as_ptr_range(), which returns a Range<BitPtr>. BitPtrRange has additional capabilities that Range<*const T> and Range<BitPtr> do not.
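§Examples
A minimal sketch of walking the range directly; .iter() remains the idiomatic way to visit each bit, and the bit-pointer range is mainly useful for lower-level code (the counting logic here is illustrative, not taken from the library docs):
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
let mut ones = 0;
for bitptr in bits.as_bitptr_range() {
    // Reading through a bit-pointer is unsafe; these pointers are known valid.
    if unsafe { bitptr.read() } {
        ones += 1;
    }
}
assert_eq!(ones, bits.count_ones());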
pub fn as_mut_bitptr_range(&mut self) -> BitPtrRange<Mut, T, O>
Views the bit-slice as a half-open range of write-capable bit-pointers, to its first bit in the bit-slice and the first bit beyond it.
§Original
§API Differences
This is renamed to indicate that it returns a bitvec structure, rather than an ordinary Range.
§Notes
BitSlice does define a .as_mut_ptr_range(), which returns a Range<BitPtr>. BitPtrRange has additional capabilities that Range<*mut T> and Range<BitPtr> do not.
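§Examples
A minimal sketch of writing through the pointer range; ordinary .iter_mut() is usually preferable (this snippet is illustrative, not taken from the library docs):
use bitvec::prelude::*;
let bits = bits![mut 0; 4];
for (idx, bitptr) in bits.as_mut_bitptr_range().enumerate() {
    // Writing through a bit-pointer is unsafe; these pointers are known valid.
    unsafe { bitptr.write(idx % 2 == 1) };
}
assert_eq!(bits, bits![0, 1, 0, 1]);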
pub fn clone_from_bitslice<T2, O2>(&mut self, src: &BitSlice<T2, O2>)
where
    T2: BitStore,
    O2: BitOrder,
Copies the bits from src into self.
self and src must have the same length.
§Performance
If src has the same type arguments as self, it will use the same implementation as .copy_from_bitslice(); if you know that this will always be the case, you should prefer to use that method directly.
Only .copy_from_bitslice() is able to perform acceleration; this method is always required to perform a bit-by-bit crawl over both bit-slices.
§Original
§API Differences
This is renamed to reflect that it copies from another bit-slice, not from an element slice.
In order to support general usage, it allows src to have different type parameters than self, at the cost of performance optimizations.
§Panics
This panics if the two bit-slices have different lengths.
§Examples
use bitvec::prelude::*;
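// A minimal sketch (not from the original docs), assuming a destination and
// source with different storage and ordering parameters.
let dst = bits![mut u8, Msb0; 0; 4];
let src = bits![u16, Lsb0; 1, 0, 1, 1];
dst.clone_from_bitslice(src);
assert_eq!(dst, bits![1, 0, 1, 1]);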
pub fn copy_from_bitslice(&mut self, src: &BitSlice<T, O>)
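Copies the bits from src into self; the signature requires src to share self's type parameters, which (per the note on .clone_from_bitslice() above) enables the accelerated implementation. A minimal sketch:
use bitvec::prelude::*;
let dst = bits![mut 0; 4];
dst.copy_from_bitslice(bits![1, 0, 1, 1]);
assert_eq!(dst, bits![1, 0, 1, 1]);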
pub fn swap_with_bitslice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>)
where
    T2: BitStore,
    O2: BitOrder,
Swaps the contents of two bit-slices.
self and other must have the same length.
§Original
§API Differences
This method is renamed, as it takes a bit-slice rather than an element slice.
§Panics
This panics if the two bit-slices have different lengths.
§Examples
use bitvec::prelude::*;
let mut one = [0xA5u8, 0x69];
let mut two = 0x1234u16;
let one_bits = one.view_bits_mut::<Msb0>();
let two_bits = two.view_bits_mut::<Lsb0>();
one_bits.swap_with_bitslice(two_bits);
assert_eq!(one, [0x2C, 0x48]);
assert_eq!(two, 0x96A5);
§impl<T, O> BitSlice<T, O>
where
    T: BitStore,
    O: BitOrder,
Bit-value queries.
pub fn count_ones(&self) -> usize
Counts the number of bits set to 1 in the bit-slice contents.
§Examples
use bitvec::prelude::*;
let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_ones(), 2);
assert_eq!(bits[2 ..].count_ones(), 0);
assert_eq!(bits![].count_ones(), 0);
pub fn count_zeros(&self) -> usize
Counts the number of bits cleared to 0 in the bit-slice contents.
§Examples
use bitvec::prelude::*;
let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_zeros(), 0);
assert_eq!(bits[2 ..].count_zeros(), 2);
assert_eq!(bits![].count_zeros(), 0);
pub fn iter_ones(&self) -> IterOnes<'_, T, O>
Enumerates the index of each bit in a bit-slice set to 1.
This is a shorthand for a .enumerate().filter_map() iterator that selects the index of each true bit; however, its implementation is eligible for optimizations that the individual-bit iterator is not.
Specializations for the Lsb0 and Msb0 orderings allow processors with instructions that seek particular bits within an element to operate on whole elements, rather than on each bit individually.
§Examples
This example uses .iter_ones(), a .filter_map() that finds the index of each set bit, and the known indices, in order to show that they have equivalent behavior.
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 0, 0, 1];
let iter_ones = bits.iter_ones();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
.by_vals()
.enumerate()
.filter_map(|(idx, bit)| if bit { Some(idx) } else { None });
let all = iter_ones.zip(known_indices).zip(filter);
for ((iter_one, known), filtered) in all {
assert_eq!(iter_one, known);
assert_eq!(known, filtered);
}
pub fn iter_zeros(&self) -> IterZeros<'_, T, O>
Enumerates the index of each bit in a bit-slice cleared to 0.
This is a shorthand for a .enumerate().filter_map() iterator that selects the index of each false bit; however, its implementation is eligible for optimizations that the individual-bit iterator is not.
Specializations for the Lsb0 and Msb0 orderings allow processors with instructions that seek particular bits within an element to operate on whole elements, rather than on each bit individually.
§Examples
This example uses .iter_zeros(), a .filter_map() that finds the index of each cleared bit, and the known indices, in order to show that they have equivalent behavior.
use bitvec::prelude::*;
let bits = bits![1, 0, 1, 1, 0, 1, 1, 1, 0];
let iter_zeros = bits.iter_zeros();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
.by_vals()
.enumerate()
.filter_map(|(idx, bit)| if !bit { Some(idx) } else { None });
let all = iter_zeros.zip(known_indices).zip(filter);
for ((iter_zero, known), filtered) in all {
assert_eq!(iter_zero, known);
assert_eq!(known, filtered);
}
pub fn first_one(&self) -> Option<usize>
Finds the index of the first bit in the bit-slice set to 1.
Returns None if there is no true bit in the bit-slice.
§Examples
use bitvec::prelude::*;
assert!(bits![].first_one().is_none());
assert!(bits![0].first_one().is_none());
assert_eq!(bits![0, 1].first_one(), Some(1));
pub fn first_zero(&self) -> Option<usize>
Finds the index of the first bit in the bit-slice cleared to 0.
Returns None if there is no false bit in the bit-slice.
§Examples
use bitvec::prelude::*;
assert!(bits![].first_zero().is_none());
assert!(bits![1].first_zero().is_none());
assert_eq!(bits![1, 0].first_zero(), Some(1));
pub fn last_one(&self) -> Option<usize>
Finds the index of the last bit in the bit-slice set to 1.
Returns None if there is no true bit in the bit-slice.
§Examples
use bitvec::prelude::*;
assert!(bits![].last_one().is_none());
assert!(bits![0].last_one().is_none());
assert_eq!(bits![1, 0].last_one(), Some(0));
pub fn last_zero(&self) -> Option<usize>
Finds the index of the last bit in the bit-slice cleared to 0.
Returns None if there is no false bit in the bit-slice.
§Examples
use bitvec::prelude::*;
assert!(bits![].last_zero().is_none());
assert!(bits![1].last_zero().is_none());
assert_eq!(bits![0, 1].last_zero(), Some(0));
pub fn leading_ones(&self) -> usize
Counts the number of bits from the start of the bit-slice to the first bit set to 0.
This returns 0 if the bit-slice is empty.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![].leading_ones(), 0);
assert_eq!(bits![0].leading_ones(), 0);
assert_eq!(bits![1, 0].leading_ones(), 1);
pub fn leading_zeros(&self) -> usize
Counts the number of bits from the start of the bit-slice to the first bit set to 1.
This returns 0 if the bit-slice is empty.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![].leading_zeros(), 0);
assert_eq!(bits![1].leading_zeros(), 0);
assert_eq!(bits![0, 1].leading_zeros(), 1);
pub fn trailing_ones(&self) -> usize
Counts the number of bits from the end of the bit-slice to the last bit set to 0.
This returns 0 if the bit-slice is empty.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![].trailing_ones(), 0);
assert_eq!(bits![0].trailing_ones(), 0);
assert_eq!(bits![0, 1].trailing_ones(), 1);
pub fn trailing_zeros(&self) -> usize
Counts the number of bits from the end of the bit-slice to the last bit set to 1.
This returns 0 if the bit-slice is empty.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![].trailing_zeros(), 0);
assert_eq!(bits![1].trailing_zeros(), 0);
assert_eq!(bits![1, 0].trailing_zeros(), 1);
pub fn any(&self) -> bool
Tests if there is at least one bit set to 1 in the bit-slice.
Returns false when self is empty.
§Examples
use bitvec::prelude::*;
assert!(!bits![].any());
assert!(!bits![0].any());
assert!(bits![0, 1].any());
pub fn all(&self) -> bool
Tests if every bit is set to 1 in the bit-slice.
Returns true when self is empty.
§Examples
use bitvec::prelude::*;
assert!( bits![].all());
assert!(!bits![0].all());
assert!( bits![1].all());
pub fn not_any(&self) -> bool
Tests if every bit is cleared to 0 in the bit-slice.
Returns true when self is empty.
§Examples
use bitvec::prelude::*;
assert!( bits![].not_any());
assert!(!bits![1].not_any());
assert!( bits![0].not_any());
§impl<T, O> BitSlice<T, O>
where
    T: BitStore,
    O: BitOrder,
Buffer manipulation.
pub fn shift_left(&mut self, by: usize)
Shifts the contents of a bit-slice “left” (towards the zero-index), clearing the “right” bits to 0.
This is a strictly-worse analogue to taking bits = &bits[by ..]: it has to modify the entire memory region that bits governs, and destroys contained information. Unless the actual memory layout and contents of your bit-slice matter to your program, you should probably prefer to munch your way forward through a bit-slice handle.
Note also that the “left” here is semantic only, and does not necessarily correspond to a left-shift instruction applied to the underlying integer storage.
This has no effect when by is 0. When by is self.len(), the bit-slice is entirely cleared to 0.
§Panics
This panics if by is greater than self.len().
§Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits are retained ^--------------------------^
bits.shift_left(2);
assert_eq!(bits, bits![1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0]);
// and move here ^--------------------------^
let bits = bits![mut 1; 2];
bits.shift_left(2);
assert_eq!(bits, bits![0; 2]);
pub fn shift_right(&mut self, by: usize)
Shifts the contents of a bit-slice “right” (away from the zero-index), clearing the “left” bits to 0.
This is a strictly-worse analogue to taking bits = &bits[.. bits.len() - by]: it must modify the entire memory region that bits governs, and destroys contained information. Unless the actual memory layout and contents of your bit-slice matter to your program, you should probably prefer to munch your way backward through a bit-slice handle.
Note also that the “right” here is semantic only, and does not necessarily correspond to a right-shift instruction applied to the underlying integer storage.
This has no effect when by is 0. When by is self.len(), the bit-slice is entirely cleared to 0.
§Panics
This panics if by is greater than self.len().
§Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits stay ^--------------------------^
bits.shift_right(2);
assert_eq!(bits, bits![0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1]);
// and move here ^--------------------------^
let bits = bits![mut 1; 2];
bits.shift_right(2);
assert_eq!(bits, bits![0; 2]);
§impl<T, O> BitSlice<T, O>
where
    T: BitStore,
    O: BitOrder,
Constructors.
pub fn from_element(elem: &T) -> &BitSlice<T, O>
Constructs a shared &BitSlice reference over a shared element.
The BitView trait, implemented on all BitStore implementors, provides a .view_bits::<O>() method which delegates to this function and may be more convenient for you to write.
§Parameters
- elem: A shared reference to a memory element.
§Returns
A shared &BitSlice over elem.
§Examples
use bitvec::prelude::*;
let elem = 0u8;
let bits = BitSlice::<_, Lsb0>::from_element(&elem);
assert_eq!(bits.len(), 8);
let bits = elem.view_bits::<Lsb0>();
pub fn from_element_mut(elem: &mut T) -> &mut BitSlice<T, O>
Constructs an exclusive &mut BitSlice reference over an element.
The BitView trait, implemented on all BitStore implementors, provides a .view_bits_mut::<O>() method which delegates to this function and may be more convenient for you to write.
§Parameters
- elem: An exclusive reference to a memory element.
§Returns
An exclusive &mut BitSlice over elem.
Note that the original elem reference will be inaccessible for the duration of the returned bit-slice handle’s lifetime.
§Examples
use bitvec::prelude::*;
let mut elem = 0u8;
let bits = BitSlice::<_, Lsb0>::from_element_mut(&mut elem);
bits.set(1, true);
assert!(bits[1]);
assert_eq!(elem, 2);
let bits = elem.view_bits_mut::<Lsb0>();
pub fn from_slice(slice: &[T]) -> &BitSlice<T, O>
Constructs a shared &BitSlice reference over a slice of elements.
The BitView trait, implemented on all [T] slices, provides a .view_bits::<O>() method which delegates to this function and may be more convenient for you to write.
§Parameters
- slice: A shared reference to a slice of memory elements.
§Returns
A shared BitSlice reference over all of slice.
§Panics
This will panic if slice is too long to encode as a bit-slice view.
§Examples
use bitvec::prelude::*;
let data = [0u16, 1];
let bits = BitSlice::<_, Lsb0>::from_slice(&data);
assert!(bits[16]);
let bits = data.view_bits::<Lsb0>();
pub fn try_from_slice(slice: &[T]) -> Result<&BitSlice<T, O>, BitSpanError<T>>
Attempts to construct a shared &BitSlice reference over a slice of elements.
The BitView trait, implemented on all [T] slices, provides a .try_view_bits::<O>() method which delegates to this function and may be more convenient for you to write.
This is very hard, if not impossible, to cause to fail. Rust will not create excessive arrays on 64-bit architectures.
§Parameters
- slice: A shared reference to a slice of memory elements.
§Returns
A shared &BitSlice over slice. If slice is longer than can be encoded into a &BitSlice (see MAX_ELTS), this will fail and return the original slice as an error.
§Examples
use bitvec::prelude::*;
let data = [0u8, 1];
let bits = BitSlice::<_, Msb0>::try_from_slice(&data).unwrap();
assert!(bits[15]);
let bits = data.try_view_bits::<Msb0>().unwrap();
pub fn from_slice_mut(slice: &mut [T]) -> &mut BitSlice<T, O>
Constructs an exclusive &mut BitSlice reference over a slice of elements.
The BitView trait, implemented on all [T] slices, provides a .view_bits_mut::<O>() method which delegates to this function and may be more convenient for you to write.
§Parameters
- slice: An exclusive reference to a slice of memory elements.
§Returns
An exclusive &mut BitSlice over all of slice.
§Panics
This panics if slice is too long to encode as a bit-slice view.
§Examples
use bitvec::prelude::*;
let mut data = [0u16; 2];
let bits = BitSlice::<_, Lsb0>::from_slice_mut(&mut data);
bits.set(0, true);
bits.set(17, true);
assert_eq!(data, [1, 2]);
let bits = data.view_bits_mut::<Lsb0>();
pub fn try_from_slice_mut(slice: &mut [T]) -> Result<&mut BitSlice<T, O>, BitSpanError<T>>
Attempts to construct an exclusive &mut BitSlice reference over a slice of elements.
The BitView trait, implemented on all [T] slices, provides a .try_view_bits_mut::<O>() method which delegates to this function and may be more convenient for you to write.
§Parameters
- slice: An exclusive reference to a slice of memory elements.
§Returns
An exclusive &mut BitSlice over slice. If slice is longer than can be encoded into a &mut BitSlice (see MAX_ELTS), this will fail and return the original slice as an error.
§Examples
use bitvec::prelude::*;
let mut data = [0u8; 2];
let bits = BitSlice::<_, Msb0>::try_from_slice_mut(&mut data).unwrap();
bits.set(7, true);
bits.set(15, true);
assert_eq!(data, [1; 2]);
let bits = data.try_view_bits_mut::<Msb0>().unwrap();
pub unsafe fn from_slice_unchecked(slice: &[T]) -> &BitSlice<T, O>
Constructs a shared &BitSlice over an element slice, without checking its length.
If slice is too long to encode into a &BitSlice, then the produced bit-slice’s length is unspecified.
§Safety
You must ensure that slice.len() < BitSlice::MAX_ELTS.
Calling this function with an over-long slice is library-level undefined behavior. You may not assume anything about its implementation or behavior, and must conservatively assume that over-long slices cause compiler UB.
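§Examples
A minimal sketch; the two-element slice here is far below MAX_ELTS, so the safety precondition holds.
use bitvec::prelude::*;
let data = [0u8, 1];
let bits = unsafe { BitSlice::<_, Msb0>::from_slice_unchecked(&data) };
assert_eq!(bits.len(), 16);
assert!(bits[15]);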
pub unsafe fn from_slice_unchecked_mut(slice: &mut [T]) -> &mut BitSlice<T, O>
Constructs an exclusive &mut BitSlice over an element slice, without checking its length.
If slice is too long to encode into a &mut BitSlice, then the produced bit-slice’s length is unspecified.
§Safety
You must ensure that slice.len() < BitSlice::MAX_ELTS.
Calling this function with an over-long slice is library-level undefined behavior. You may not assume anything about its implementation or behavior, and must conservatively assume that over-long slices cause compiler UB.
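§Examples
A minimal sketch with the same length precondition as above.
use bitvec::prelude::*;
let mut data = [0u8; 2];
let bits = unsafe { BitSlice::<_, Msb0>::from_slice_unchecked_mut(&mut data) };
bits.set(15, true);
assert_eq!(data, [0, 1]);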
§impl<T, O> BitSlice<T, O>
where
    T: BitStore,
    O: BitOrder,
Extensions of standard APIs.
pub fn set(&mut self, index: usize, value: bool)
Writes a new value into a single bit.
This is the replacement for *slice[index] = value;, as bitvec is not able to express that under the current IndexMut API signature.
§Parameters
- &mut self
- index: The bit-index to set. It must be in 0 .. self.len().
- value: The new bit-value to write into the bit at index.
§Panics
This panics if index is out of bounds.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 1];
bits.set(0, true);
bits.set(1, false);
assert_eq!(bits, bits![1, 0]);
pub unsafe fn set_unchecked(&mut self, index: usize, value: bool)
Writes a new value into a single bit, without bounds checking.
§Parameters
- &mut self
- index: The bit-index to set. It must be in 0 .. self.len().
- value: The new bit-value to write into the bit at index.
§Safety
You must ensure that index is in the range 0 .. self.len().
This performs bit-pointer offset arithmetic without doing any bounds checks. If index is out of bounds, then this will issue an out-of-bounds access and will trigger memory unsafety.
§Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 2];
assert_eq!(bits.len(), 2);
unsafe {
bits.set_unchecked(3, true);
}
assert_eq!(data, 8);
pub unsafe fn replace_unchecked(&mut self, index: usize, value: bool) -> bool
Writes a new value into a bit, returning the previous value, without bounds checking.
§Safety
index must be less than self.len().
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0];
let old = unsafe {
let a = &mut bits[.. 1];
a.replace_unchecked(1, true)
};
assert!(!old);
assert!(bits[1]);
pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)
Swaps two bits in a bit-slice, without bounds checking.
See .swap() for documentation.
§Safety
You must ensure that a and b are both in the range 0 .. self.len().
This method performs bit-pointer offset arithmetic without doing any bounds checks. If a or b are out of bounds, then this will issue an out-of-bounds access and will trigger memory unsafety.
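§Examples
A minimal sketch; both indices are statically known to be in bounds here.
use bitvec::prelude::*;
let bits = bits![mut 1, 0];
unsafe {
    bits.swap_unchecked(0, 1);
}
assert_eq!(bits, bits![0, 1]);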
pub unsafe fn split_at_unchecked(&self, mid: usize) -> (&BitSlice<T, O>, &BitSlice<T, O>)
Splits a bit-slice at an index, without bounds checking.
See .split_at() for documentation.
§Safety
You must ensure that mid is in the range 0 ..= self.len().
This method produces new bit-slice references. If mid is out of bounds, its behavior is library-level undefined. You must conservatively assume that an out-of-bounds split point produces compiler-level UB.
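§Examples
A minimal sketch; mid lies within 0 ..= self.len(), so the call is sound.
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 1];
let (left, right) = unsafe { bits.split_at_unchecked(2) };
assert_eq!(left, bits![0; 2]);
assert_eq!(right, bits![1; 2]);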
pub unsafe fn split_at_unchecked_mut(&mut self, mid: usize) -> (&mut BitSlice<<T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)
Splits a mutable bit-slice at an index, without bounds checking.
See .split_at_mut() for documentation.
§Safety
You must ensure that mid is in the range 0 ..= self.len().
This method produces new bit-slice references. If mid is out of bounds, its behavior is library-level undefined. You must conservatively assume that an out-of-bounds split point produces compiler-level UB.
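§Examples
A minimal sketch; as with .split_at_mut(), both halves come back alias-marked.
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 6];
let (left, right) = unsafe { bits.split_at_unchecked_mut(3) };
left.store(3);
right.store(5);
assert_eq!(bits, bits![0, 1, 1, 1, 0, 1]);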
pub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize)
where
    R: RangeExt<usize>,
Copies bits from one region of the bit-slice to another region of itself, without doing bounds checks.
The regions are allowed to overlap.
§Parameters
- &mut self
- src: The range within self from which to copy.
- dest: The starting index within self at which to paste.
§Effects
self[src] is copied to self[dest .. dest + src.len()]. The bits of self[src] are in an unspecified, but initialized, state.
§Safety
src.end() and dest + src.len() must be entirely within bounds.
§Examples
use bitvec::prelude::*;
let mut data = 0b1011_0000u8;
let bits = data.view_bits_mut::<Msb0>();
unsafe {
bits.copy_within_unchecked(.. 4, 2);
}
assert_eq!(data, 0b1010_1100);
§impl<T, O> BitSlice<T, O>
where
    T: BitStore,
    O: BitOrder,
Miscellaneous information.
pub const MAX_BITS: usize = 2_305_843_009_213_693_951usize
The inclusive maximum length of a BitSlice<_, T>.
As BitSlice is zero-indexed, the largest possible index is one less than this value.
| CPU word width | Value |
|---|---|
| 32 bits | 0x1fff_ffff |
| 64 bits | 0x1fff_ffff_ffff_ffff |
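A short sanity check, consistent with the table above (the constant does not depend on the storage or ordering type parameters):
use bitvec::prelude::*;
// usize::MAX >> 3 is 0x1fff_ffff on 32-bit targets and 0x1fff_ffff_ffff_ffff on 64-bit targets.
assert_eq!(BitSlice::<u8, Lsb0>::MAX_BITS, usize::MAX >> 3);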
pub const MAX_ELTS: usize = BitSpan<Const, T, O>::REGION_MAX_ELTS
The inclusive maximum length that a [T] slice can be for BitSlice<_, T> to cover it.
A BitSlice<_, T> that begins in the interior of an element and contains the maximum number of bits will extend one element past the cutoff that would occur if the bit-slice began at the zeroth bit. Such a bit-slice is difficult to manually construct, but would not otherwise fail.
| Type Bits | Max Elements (32-bit) | Max Elements (64-bit) |
|---|---|---|
| 8 | 0x0400_0001 | 0x0400_0000_0000_0001 |
| 16 | 0x0200_0001 | 0x0200_0000_0000_0001 |
| 32 | 0x0100_0001 | 0x0100_0000_0000_0001 |
| 64 | 0x0080_0001 | 0x0080_0000_0000_0001 |
§impl<T, O> BitSlice<T, O>
where
    T: BitStore,
    O: BitOrder,
Port of the [T] inherent API.
pub fn len(&self) -> usize
pub fn is_empty(&self) -> bool
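Both behave like their [T] counterparts, counting in bits rather than elements; a minimal sketch:
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
assert_eq!(bits.len(), 3);
assert!(!bits.is_empty());
assert!(bits![].is_empty());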
pub fn first(&self) -> Option<BitRef<'_, Const, T, O>>
Gets a reference to the first bit of the bit-slice, or None if it is empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
§Examples
use bitvec::prelude::*;
let bits = bits![1, 0, 0];
assert_eq!(bits.first().as_deref(), Some(&true));
assert!(bits![].first().is_none());
pub fn first_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>
Gets a mutable reference to the first bit of the bit-slice, or None if it is empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some(mut first) = bits.first_mut() {
*first = true;
}
assert_eq!(bits, bits![1, 0, 0]);
assert!(bits![mut].first_mut().is_none());
pub fn split_first(&self) -> Option<(BitRef<'_, Const, T, O>, &BitSlice<T, O>)>
Splits the bit-slice into a reference to its first bit, and the rest of the bit-slice. Returns None when empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
§Examples
use bitvec::prelude::*;
let bits = bits![1, 0, 0];
let (first, rest) = bits.split_first().unwrap();
assert_eq!(first, &true);
assert_eq!(rest, bits![0; 2]);
pub fn split_first_mut(&mut self) -> Option<(BitRef<'_, Mut, <T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)>
Splits the bit-slice into mutable references of its first bit, and the rest of the bit-slice. Returns None when empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some((mut first, rest)) = bits.split_first_mut() {
*first = true;
assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![1, 0, 0]);
pub fn split_last(&self) -> Option<(BitRef<'_, Const, T, O>, &BitSlice<T, O>)>
Splits the bit-slice into a reference to its last bit, and the rest of the bit-slice. Returns None when empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
let (last, rest) = bits.split_last().unwrap();
assert_eq!(last, &true);
assert_eq!(rest, bits![0; 2]);
pub fn split_last_mut(&mut self) -> Option<(BitRef<'_, Mut, <T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)>
Splits the bit-slice into mutable references to its last bit, and the rest of the bit-slice. Returns None when empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some((mut last, rest)) = bits.split_last_mut() {
*last = true;
assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![0, 0, 1]);
pub fn last(&self) -> Option<BitRef<'_, Const, T, O>>
Gets a reference to the last bit of the bit-slice, or None if it is empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
assert_eq!(bits.last().as_deref(), Some(&true));
assert!(bits![].last().is_none());
pub fn last_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>
Gets a mutable reference to the last bit of the bit-slice, or None if it is empty.
§Original
§API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some(mut last) = bits.last_mut() {
*last = true;
}
assert_eq!(bits, bits![0, 0, 1]);
assert!(bits![mut].last_mut().is_none());
pub fn get<'a, I>(&'a self, index: I) -> Option<<I as BitSliceIndex<'a, T, O>>::Immut>
where
    I: BitSliceIndex<'a, T, O>,
Gets a reference to a single bit or a subsection of the bit-slice, depending on the type of index.
- If given a usize, this produces a reference structure to the bool at the position.
- If given any form of range, this produces a smaller bit-slice.
This returns None if the index departs the bounds of self.
§Original
§API Differences
BitSliceIndex uses discrete types for immutable and mutable references, rather than a single referent type.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
assert_eq!(bits.get(1).as_deref(), Some(&true));
assert_eq!(bits.get(0 .. 2), Some(bits![0, 1]));
assert!(bits.get(3).is_none());
assert!(bits.get(0 .. 4).is_none());
pub fn get_mut<'a, I>(&'a mut self, index: I) -> Option<<I as BitSliceIndex<'a, T, O>>::Mut>
where
    I: BitSliceIndex<'a, T, O>,
Gets a mutable reference to a single bit or a subsection of the bit-slice, depending on the type of index.
- If given a usize, this produces a reference structure to the bool at the position.
- If given any form of range, this produces a smaller bit-slice.
This returns None if the index departs the bounds of self.
§Original
§API Differences
BitSliceIndex uses discrete types for immutable and mutable references, rather than a single referent type.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
*bits.get_mut(0).unwrap() = true;
bits.get_mut(1 ..).unwrap().fill(true);
assert_eq!(bits, bits![1; 3]);
pub unsafe fn get_unchecked<'a, I>(&'a self, index: I) -> <I as BitSliceIndex<'a, T, O>>::Immut
where
    I: BitSliceIndex<'a, T, O>,
Gets a reference to a single bit or to a subsection of the bit-slice, without bounds checking.
This has the same arguments and behavior as .get(), except that it does not check that index is in bounds.
§Original
§Safety
You must ensure that index is within bounds (within the range 0 .. self.len()), or this method will introduce memory safety and/or undefined behavior.
It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.
§Examples
use bitvec::prelude::*;
let data = 0b0001_0010u8;
let bits = &data.view_bits::<Lsb0>()[.. 3];
unsafe {
assert!(bits.get_unchecked(1));
assert!(bits.get_unchecked(4));
}
pub unsafe fn get_unchecked_mut<'a, I>(&'a mut self, index: I) -> <I as BitSliceIndex<'a, T, O>>::Mut
where
    I: BitSliceIndex<'a, T, O>,
Gets a mutable reference to a single bit or a subsection of the bit-slice, depending on the type of index.
This has the same arguments and behavior as .get_mut(), except that it does not check that index is in bounds.
§Original
§Safety
You must ensure that index is within bounds (within the range 0 .. self.len()), or this method will introduce memory safety and/or undefined behavior.
It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.
§Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 3];
unsafe {
bits.get_unchecked_mut(1).commit(true);
bits.get_unchecked_mut(4 .. 6).fill(true);
}
assert_eq!(data, 0b0011_0010);
pub fn as_ptr(&self) -> BitPtr<Const, T, O>
Use .as_bitptr() instead.
pub fn as_mut_ptr(&mut self) -> BitPtr<Mut, T, O>
Use .as_mut_bitptr() instead.
pub fn as_ptr_range(&self) -> Range<BitPtr<Const, T, O>>
Produces a range of bit-pointers to each bit in the bit-slice.
This is a standard-library range, which has no real functionality for pointer types. You should prefer .as_bitptr_range() instead, as it produces a custom structure that provides expected ranging functionality.
§Original
pub fn as_mut_ptr_range(&mut self) -> Range<BitPtr<Mut, T, O>>
Produces a range of mutable bit-pointers to each bit in the bit-slice.
This is a standard-library range, which has no real functionality for pointer types. You should prefer .as_mut_bitptr_range() instead, as it produces a custom structure that provides expected ranging functionality.
§Original
pub fn swap(&mut self, a: usize, b: usize)
pub fn reverse(&mut self)
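Both behave like their [T] counterparts; a minimal sketch:
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1];
bits.swap(0, 2);
assert_eq!(bits, bits![1, 0, 0]);
bits.reverse();
assert_eq!(bits, bits![0, 0, 1]);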
pub fn iter(&self) -> Iter<'_, T, O>
Produces an iterator over each bit in the bit-slice.
§Original
§API Differences
This iterator yields proxy-reference structures, not &bool. It can be adapted to yield &bool with the .by_refs() method, or bool with .by_vals().
This iterator, and its adapters, are fast. Do not try to be more clever than them by abusing .as_bitptr_range().
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
let mut iter = bits.iter();
assert!(!iter.next().unwrap());
assert!( iter.next().unwrap());
assert!( iter.next_back().unwrap());
assert!(!iter.next_back().unwrap());
assert!( iter.next().is_none());
pub fn iter_mut(&mut self) -> IterMut<'_, T, O>
Produces a mutable iterator over each bit in the bit-slice.
§Original
§API Differences
This iterator yields proxy-reference structures, not &mut bool. In addition, it marks each proxy as alias-tainted.
If you are using this in an ordinary loop and not keeping multiple yielded proxy-references alive at the same scope, you may use the .remove_alias() adapter to undo the alias marking.
This iterator is fast. Do not try to be more clever than it by abusing .as_mut_bitptr_range().
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 4];
let mut iter = bits.iter_mut();
iter.nth(1).unwrap().commit(true); // index 1
iter.next_back().unwrap().commit(true); // index 3
assert!(iter.next().is_some()); // index 2
assert!(iter.next().is_none()); // complete
assert_eq!(bits, bits![0, 1, 0, 1]);
pub fn windows(&self, size: usize) -> Windows<'_, T, O>
Iterates over consecutive windowing subslices in a bit-slice.
Windows are overlapping views of the bit-slice. Each window advances one bit from the previous, so in a bit-slice [A, B, C, D, E], calling .windows(3) will yield [A, B, C], [B, C, D], and [C, D, E].
§Original
§Panics
This panics if size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.windows(3);
assert_eq!(iter.next(), Some(bits![0, 1, 0]));
assert_eq!(iter.next(), Some(bits![1, 0, 0]));
assert_eq!(iter.next(), Some(bits![0, 0, 1]));
assert!(iter.next().is_none());
pub fn chunks(&self, chunk_size: usize) -> Chunks<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice.
Unlike .windows(), the subslices this yields do not overlap with each other. If self.len() is not an even multiple of chunk_size, then the last chunk yielded will be shorter.
§Original
§Sibling Methods
- .chunks_mut() has the same division logic, but each yielded bit-slice is mutable.
- .chunks_exact() does not yield the final chunk if it is shorter than chunk_size.
- .rchunks() iterates from the back of the bit-slice to the front, with the final, possibly-shorter, segment at the front edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert_eq!(iter.next(), Some(bits![1]));
assert!(iter.next().is_none());
pub fn chunks_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice.
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Original
§Sibling Methods
- .chunks() has the same division logic, but each yielded bit-slice is immutable.
- .chunks_exact_mut() does not yield the final chunk if it is shorter than chunk_size.
- .rchunks_mut() iterates from the back of the bit-slice to the front, with the final, possibly-shorter, segment at the front edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
for (idx, chunk) in unsafe {
bits.chunks_mut(2).remove_alias()
}.enumerate() {
chunk.store(idx + 1);
}
assert_eq!(bits, bits![0, 1, 1, 0, 1]);
// ^^^^ ^^^^ ^
pub fn chunks_exact(&self, chunk_size: usize) -> ChunksExact<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice.
If self.len() is not an even multiple of chunk_size, then the last few bits are not yielded by the iterator at all. They can be accessed with the .remainder() method if the iterator is bound to a name.
§Original
§Sibling Methods
- .chunks() yields any leftover bits at the end as a shorter chunk during iteration.
- .chunks_exact_mut() has the same division logic, but each yielded bit-slice is mutable.
- .rchunks_exact() iterates from the back of the bit-slice to the front, with the unyielded remainder segment at the front edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks_exact(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![1]);
pub fn chunks_exact_mut(&mut self, chunk_size: usize) -> ChunksExactMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice.
If self.len() is not an even multiple of chunk_size, then the last few bits are not yielded by the iterator at all. They can be accessed with the .into_remainder() method if the iterator is bound to a name.
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Original
§Sibling Methods
- .chunks_mut() yields any leftover bits at the end as a shorter chunk during iteration.
- .chunks_exact() has the same division logic, but each yielded bit-slice is immutable.
- .rchunks_exact_mut() iterates from the back of the bit-slice forwards, with the unyielded remainder segment at the front edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.chunks_exact_mut(2);
for (idx, chunk) in iter.by_ref().enumerate() {
chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);
assert_eq!(bits, bits![0, 1, 1, 0, 1]);
// remainder ^
pub fn rchunks(&self, chunk_size: usize) -> RChunks<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice, from the back edge.
Unlike .chunks(), this aligns its chunks to the back edge of self. If self.len() is not an even multiple of chunk_size, then the leftover partial chunk is self[0 .. len % chunk_size].
§Original
§Sibling Methods
- .rchunks_mut() has the same division logic, but each yielded bit-slice is mutable.
- .rchunks_exact() does not yield the final chunk if it is shorter than chunk_size.
- .chunks() iterates from the front of the bit-slice to the back, with the final, possibly-shorter, segment at the back edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert_eq!(iter.next(), Some(bits![0]));
assert!(iter.next().is_none());
pub fn rchunks_mut(&mut self, chunk_size: usize) -> RChunksMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.
Unlike .chunks_mut(), this aligns its chunks to the back edge of self. If self.len() is not an even multiple of chunk_size, then the leftover partial chunk is self[0 .. len % chunk_size].
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded values for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Original
§Sibling Methods
- .rchunks() has the same division logic, but each yielded bit-slice is immutable.
- .rchunks_exact_mut() does not yield the final chunk if it is shorter than chunk_size.
- .chunks_mut() iterates from the front of the bit-slice to the back, with the final, possibly-shorter, segment at the back edge.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
for (idx, chunk) in unsafe {
bits.rchunks_mut(2).remove_alias()
}.enumerate() {
chunk.store(idx + 1);
}
assert_eq!(bits, bits![1, 1, 0, 0, 1]);
// remainder ^ ^^^^ ^^^^
pub fn rchunks_exact(&self, chunk_size: usize) -> RChunksExact<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice, from the back edge.
If self.len() is not an even multiple of chunk_size, then the first few bits are not yielded by the iterator at all. They can be accessed with the .remainder() method if the iterator is bound to a name.
§Original
§Sibling Methods
- .rchunks() yields any leftover bits at the front as a shorter chunk during iteration.
- .rchunks_exact_mut() has the same division logic, but each yielded bit-slice is mutable.
- .chunks_exact() iterates from the front of the bit-slice to the back, with the unyielded remainder segment at the back edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks_exact(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![0]);
pub fn rchunks_exact_mut(&mut self, chunk_size: usize) -> RChunksExactMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.
If self.len() is not an even multiple of chunk_size, then the first few bits are not yielded by the iterator at all. They can be accessed with the .into_remainder() method if the iterator is bound to a name.
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Sibling Methods
- .rchunks_mut() yields any leftover bits at the front as a shorter chunk during iteration.
- .rchunks_exact() has the same division logic, but each yielded bit-slice is immutable.
- .chunks_exact_mut() iterates from the front of the bit-slice backwards, with the unyielded remainder segment at the back edge.
§Panics
This panics if chunk_size is 0.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.rchunks_exact_mut(2);
for (idx, chunk) in iter.by_ref().enumerate() {
chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);
assert_eq!(bits, bits![1, 1, 0, 0, 1]);
// remainder ^
pub fn split_at(&self, mid: usize) -> (&BitSlice<T, O>, &BitSlice<T, O>)
Splits a bit-slice in two parts at an index.
The returned bit-slices are self[.. mid] and self[mid ..]. mid is included in the right bit-slice, not the left.
If mid is 0 then the left bit-slice is empty; if it is self.len() then the right bit-slice is empty.
This method guarantees that even when either partition is empty, the encoded bit-pointer values of the bit-slice references are &self[0] and &self[mid].
§Original
§Panics
This panics if mid is greater than self.len(). It is allowed to be equal to the length, in which case the right bit-slice is simply empty.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 0, 1, 1, 1];
let base = bits.as_bitptr();
let (a, b) = bits.split_at(0);
assert_eq!(unsafe { a.as_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 0);
let (a, b) = bits.split_at(6);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 6);
let (a, b) = bits.split_at(3);
assert_eq!(a, bits![0; 3]);
assert_eq!(b, bits![1; 3]);
pub fn split_at_mut(&mut self, mid: usize) -> (&mut BitSlice<<T as BitStore>::Alias, O>, &mut BitSlice<<T as BitStore>::Alias, O>)
Splits a mutable bit-slice in two parts at an index.
The returned bit-slices are self[.. mid] and self[mid ..]. mid is included in the right bit-slice, not the left.
If mid is 0 then the left bit-slice is empty; if it is self.len() then the right bit-slice is empty.
This method guarantees that even when either partition is empty, the encoded bit-pointer values of the bit-slice references are &self[0] and &self[mid].
§Original
§API Differences
The end bits of the left half and the start bits of the right half might be stored in the same memory element. In order to avoid breaking bitvec’s memory-safety guarantees, both bit-slices are marked as T::Alias. This marking allows them to be used without interfering with each other when they interact with memory.
§Panics
This panics if mid is greater than self.len(). It is allowed to be equal to the length, in which case the right bit-slice is simply empty.
§Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 6];
let base = bits.as_mut_bitptr();
let (a, b) = bits.split_at_mut(0);
assert_eq!(unsafe { a.as_mut_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 0);
let (a, b) = bits.split_at_mut(6);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 6);
let (a, b) = bits.split_at_mut(3);
a.store(3);
b.store(5);
assert_eq!(bits, bits![0, 1, 1, 1, 0, 1]);
pub fn split<F>(&self, pred: F) -> Split<'_, T, O, F>
Iterates over subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
- .split_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split_inclusive() includes the matched bit in the yielded bit-slice.
- .rsplit() iterates from the back of the bit-slice instead of the front.
- .splitn() times out after n yields.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
// ^
let mut iter = bits.split(|pos, _bit| pos % 3 == 2);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert_eq!(iter.next().unwrap(), bits![0]);
assert!(iter.next().is_none());
If the first bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the last bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
// ^
let mut iter = bits.split(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().unwrap().is_empty());
assert!(iter.next().is_none());
If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:
use bitvec::prelude::*;
let bits = bits![1, 0, 0, 1];
// ^ ^
let mut iter = bits.split(|_pos, bit| !*bit);
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());
pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, T, O, F>
Iterates over mutable subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
- .split() has the same splitting logic, but each yielded bit-slice is immutable.
- .split_inclusive_mut() includes the matched bit in the yielded bit-slice.
- .rsplit_mut() iterates from the back of the bit-slice instead of the front.
- .splitn_mut() times out after n yields.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// ^ ^
for group in bits.split_mut(|_pos, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);
pub fn split_inclusive<F>(&self, pred: F) -> SplitInclusive<'_, T, O, F>
Iterates over subslices separated by bits that match a predicate. Unlike .split(), this does include the matching bit as the last bit in the yielded bit-slice.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
- .split_inclusive_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split() does not include the matched bit in the yielded bit-slice.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1];
// ^ ^
let mut iter = bits.split_inclusive(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());
pub fn split_inclusive_mut<F>(&mut self, pred: F) -> SplitInclusiveMut<'_, T, O, F>
Iterates over mutable subslices separated by bits that match a predicate. Unlike .split_mut(), this does include the matching bit as the last bit in the bit-slice.
Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
- .split_inclusive() has the same splitting logic, but each yielded bit-slice is immutable.
- .split_mut() does not include the matched bit in the yielded bit-slice.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 0, 0, 0];
// ^
for group in bits.split_inclusive_mut(|pos, _bit| pos % 3 == 2) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 0, 1, 0]);
pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, T, O, F>
Iterates over subslices separated by bits that match a predicate, from the back edge. The matched bit is not contained in the yielded bit-slices.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
- .rsplit_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split() iterates from the front of the bit-slice instead of the back.
- .rsplitn() times out after n yields.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
// ^
let mut iter = bits.rsplit(|pos, _bit| pos % 3 == 2);
assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());
If the last bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the first bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
// ^
let mut iter = bits.rsplit(|_pos, bit| *bit);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().is_none());
If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:
use bitvec::prelude::*;
let bits = bits![1, 0, 0, 1];
// ^ ^
let mut iter = bits.rsplit(|_pos, bit| !*bit);
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());
pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, T, O, F>
pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, T, O, F>
Iterates over mutable subslices separated by bits that match a predicate, from the back. The matched bit is not contained in the yielded bit-slices.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.rsplit() has the same splitting logic, but each yielded bit-slice is immutable.
.split_mut() iterates from the front of the bit-slice instead of the back.
.rsplitn_mut() times out after n yields.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// ^ ^
for group in bits.rsplit_mut(|_pos, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);
pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, T, O, F>
pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, T, O, F>
Iterates over subslices separated by bits that match a predicate, giving
up after yielding n
times. The n
th yield contains the rest of the
bit-slice. As with .split()
, the yielded bit-slices do not contain the
matched bit.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.splitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
.rsplitn() iterates from the back of the bit-slice instead of the front.
.split() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1, 0];
let mut iter = bits.splitn(2, |_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0, 0]);
assert_eq!(iter.next().unwrap(), bits![0, 1, 0]);
assert!(iter.next().is_none());
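A count of 1 yields the whole bit-slice without performing any split, mirroring the standard library's splitn; a small sketch of that edge case, assuming bitvec follows the same counting convention:
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1, 0];
let mut iter = bits.splitn(1, |_pos, bit| *bit);
// The single permitted yield is the untouched remainder: the whole bit-slice.
assert_eq!(iter.next().unwrap(), bits);
assert!(iter.next().is_none());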
pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, T, O, F>
pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, T, O, F>
Iterates over mutable subslices separated by bits that match a
predicate, giving up after yielding n
times. The n
th yield contains
the rest of the bit-slice. As with .split_mut()
, the yielded
bit-slices do not contain the matched bit.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.splitn() has the same splitting logic, but each yielded bit-slice is immutable.
.rsplitn_mut() iterates from the back of the bit-slice instead of the front.
.split_mut() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
for group in bits.splitn_mut(2, |_pos, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 0]);
pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, T, O, F>
pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, T, O, F>
Iterates over subslices separated by bits that match a predicate from the back edge, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .rsplit(), the yielded bit-slices do not contain the matched bit.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.rsplitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
.splitn() iterates from the front of the bit-slice instead of the back.
.rsplit() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 1, 0];
// ^
let mut iter = bits.rsplitn(2, |_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert!(iter.next().is_none());
pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, T, O, F>
pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, T, O, F>
Iterates over mutable subslices separated by bits that match a predicate from the back edge, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .rsplit_mut(), the yielded bit-slices do not contain the matched bit.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
§Original
§API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
§Sibling Methods
.rsplitn() has the same splitting logic, but each yielded bit-slice is immutable.
.splitn_mut() iterates from the front of the bit-slice instead of the back.
.rsplit_mut() has the same splitting logic, but never times out.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 0, 1, 0, 0, 0];
for group in bits.rsplitn_mut(2, |_idx, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 0, 0, 1, 1, 0, 0]);
// ^ group 2 ^ group 1
pub fn contains<T2, O2>(&self, other: &BitSlice<T2, O2>) -> boolwhere
T2: BitStore,
O2: BitOrder,
pub fn contains<T2, O2>(&self, other: &BitSlice<T2, O2>) -> boolwhere
T2: BitStore,
O2: BitOrder,
Tests if the bit-slice contains the given sequence anywhere within it.
This scans over self.windows(other.len())
until one of the windows
matches. The search key does not need to share type parameters with the
bit-slice being tested, as the comparison is bit-wise. However, sharing
type parameters will accelerate the comparison.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1, 1, 0, 0];
assert!( bits.contains(bits![0, 1, 1, 0]));
assert!(!bits.contains(bits![1, 0, 0, 1]));
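Because the comparison is bit-wise, the haystack and the needle may use different storage and ordering parameters; a short sketch:
use bitvec::prelude::*;
let haystack = bits![u8, Msb0; 0, 0, 1, 0, 1, 1, 0, 0];
let needle = bits![u16, Lsb0; 0, 1, 1, 0];
// The window starting at index 3 matches, despite the differing type parameters.
assert!(haystack.contains(needle));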
pub fn starts_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> boolwhere
T2: BitStore,
O2: BitOrder,
pub fn starts_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> boolwhere
T2: BitStore,
O2: BitOrder,
Tests if the bit-slice begins with the given sequence.
The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
assert!( bits.starts_with(bits![0, 1]));
assert!(!bits.starts_with(bits![1, 0]));
This always returns true
if the needle is empty:
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.starts_with(empty));
assert!(empty.starts_with(empty));
pub fn ends_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> boolwhere
T2: BitStore,
O2: BitOrder,
pub fn ends_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> boolwhere
T2: BitStore,
O2: BitOrder,
Tests if the bit-slice ends with the given sequence.
The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
assert!( bits.ends_with(bits![1, 0]));
assert!(!bits.ends_with(bits![0, 1]));
This always returns true
if the needle is empty:
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.ends_with(empty));
assert!(empty.ends_with(empty));
pub fn strip_prefix<T2, O2>(
&self,
prefix: &BitSlice<T2, O2>
) -> Option<&BitSlice<T, O>>where
T2: BitStore,
O2: BitOrder,
pub fn strip_prefix<T2, O2>(
&self,
prefix: &BitSlice<T2, O2>
) -> Option<&BitSlice<T, O>>where
T2: BitStore,
O2: BitOrder,
Removes a prefix bit-slice, if present.
Like .starts_with()
, the search key does not need to share type
parameters with the bit-slice being stripped. If
self.starts_with(suffix)
, then this returns Some(&self[prefix.len() ..])
, otherwise it returns None
.
§Original
§API Differences
BitSlice
does not support pattern searches; instead, it permits self
and prefix
to differ in type parameters.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_prefix(bits![0, 1]).unwrap(), bits[2 ..]);
assert_eq!(bits.strip_prefix(bits![0, 1, 0, 0,]).unwrap(), bits[4 ..]);
assert!(bits.strip_prefix(bits![1, 0]).is_none());
pub fn strip_suffix<T2, O2>(
&self,
suffix: &BitSlice<T2, O2>
) -> Option<&BitSlice<T, O>>where
T2: BitStore,
O2: BitOrder,
pub fn strip_suffix<T2, O2>(
&self,
suffix: &BitSlice<T2, O2>
) -> Option<&BitSlice<T, O>>where
T2: BitStore,
O2: BitOrder,
Removes a suffix bit-slice, if present.
Like .ends_with()
, the search key does not need to share type
parameters with the bit-slice being stripped. If
self.ends_with(suffix)
, then this returns Some(&self[.. self.len() - suffix.len()])
, otherwise it returns None
.
§Original
§API Differences
BitSlice
does not support pattern searches; instead, it permits self
and suffix
to differ in type parameters.
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_suffix(bits![1, 0]).unwrap(), bits[.. 7]);
assert_eq!(bits.strip_suffix(bits![0, 1, 1, 0]).unwrap(), bits[.. 5]);
assert!(bits.strip_suffix(bits![0, 1]).is_none());
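Since both strippers return Option, they chain naturally when trimming both edges; a minimal sketch:
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
let trimmed = bits
    .strip_prefix(bits![0, 1])
    .and_then(|rest| rest.strip_suffix(bits![1, 0]))
    .unwrap();
// Both edges removed: only the middle five bits remain.
assert_eq!(trimmed, bits![0, 0, 1, 0, 1]);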
pub fn rotate_left(&mut self, by: usize)
pub fn rotate_left(&mut self, by: usize)
Rotates the contents of a bit-slice to the left (towards the zero index).
This essentially splits the bit-slice at by, then exchanges the two pieces: self[by ..] becomes the first section, and is then followed by self[.. by].
The implementation is batch-accelerated where possible. It should have a
runtime complexity much lower than O(by)
.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// split occurs here ^
bits.rotate_left(2);
assert_eq!(bits, bits![1, 0, 1, 0, 0, 0]);
pub fn rotate_right(&mut self, by: usize)
pub fn rotate_right(&mut self, by: usize)
Rotates the contents of a bit-slice to the right (away from the zero index).
This essentially splits the bit-slice at self.len() - by
, then
exchanges the two pieces. self[len - by ..]
becomes the first section,
and is then followed by self[.. len - by]
.
The implementation is batch-accelerated where possible. It should have a
runtime complexity much lower than O(by)
.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 1, 1, 0];
// split occurs here ^
bits.rotate_right(2);
assert_eq!(bits, bits![1, 0, 0, 0, 1, 1]);
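The two rotations are inverses of each other for the same by, which makes a convenient sanity check; a small sketch:
use bitvec::prelude::*;
let bits = bits![mut 0, 1, 1, 0, 1];
bits.rotate_left(2);
bits.rotate_right(2);
// Rotating left and then right by the same amount restores the original order.
assert_eq!(bits, bits![0, 1, 1, 0, 1]);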
pub fn fill(&mut self, value: bool)
pub fn fill(&mut self, value: bool)
Fills the bit-slice with a given bit.
This is a recent stabilization in the standard library. bitvec
previously offered this behavior as the novel API .set_all()
. That
method name is now removed in favor of this standard-library analogue.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 5];
bits.fill(true);
assert_eq!(bits, bits![1; 5]);
pub fn fill_with<F>(&mut self, func: F)
pub fn fill_with<F>(&mut self, func: F)
Fills the bit-slice with bits produced by a generator function.
§Original
§API Differences
The generator function receives the index of the bit being initialized as an argument.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 5];
bits.fill_with(|idx| idx % 2 == 0);
assert_eq!(bits, bits![1, 0, 1, 0, 1]);
pub fn clone_from_slice<T2, O2>(&mut self, src: &BitSlice<T2, O2>)where
T2: BitStore,
O2: BitOrder,
Deprecated: use .clone_from_bitslice() instead.
pub fn copy_from_slice(&mut self, src: &BitSlice<T, O>)
Deprecated: use .copy_from_bitslice() instead.
pub fn copy_within<R>(&mut self, src: R, dest: usize)where
R: RangeExt<usize>,
pub fn copy_within<R>(&mut self, src: R, dest: usize)where
R: RangeExt<usize>,
Copies a span of bits to another location in the bit-slice.
src is the range of bit-indices in the bit-slice to copy, and dest is the starting index of the destination range.
src and dest .. dest + src.len() are permitted to overlap; the copy will automatically detect and manage this. However, both src and dest .. dest + src.len() must fall within the bounds of self.
§Original
§Panics
This panics if either the source or destination range exceed
self.len()
.
§Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0];
bits.copy_within(1 .. 5, 8);
// v v v v
assert_eq!(bits, bits![1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0]);
// ^ ^ ^ ^
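Overlapping source and destination ranges are handled as described above; a small sketch of an overlapping copy:
use bitvec::prelude::*;
let bits = bits![mut 1, 0, 1, 1, 0, 0];
// `0 .. 4` and `2 .. 6` overlap; the original source bits are still copied intact.
bits.copy_within(0 .. 4, 2);
assert_eq!(bits, bits![1, 0, 1, 0, 1, 1]);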
pub fn swap_with_slice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>)where
T2: BitStore,
O2: BitOrder,
Deprecated: use .swap_with_bitslice() instead.
pub unsafe fn align_to<U>(
&self
) -> (&BitSlice<T, O>, &BitSlice<U, O>, &BitSlice<T, O>)where
U: BitStore,
pub unsafe fn align_to<U>(
&self
) -> (&BitSlice<T, O>, &BitSlice<U, O>, &BitSlice<T, O>)where
U: BitStore,
Produces bit-slice view(s) with different underlying storage types.
This may have unexpected effects, and you cannot assume that
before[idx] == after[idx]
! Consult the tables in the manual
for information about memory layouts.
§Original
§Notes
Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.
§Safety
You may not use this to cast away alias protections. Rust does not have
support for higher-kinded types, so this cannot express the relation
Outer<T> -> Outer<U> where Outer: BitStoreContainer
, but memory safety
does require that you respect this rule. Reälign integers to integers,
Cell
s to Cell
s, and atomics to atomics, but do not cross these
boundaries.
§Examples
use bitvec::prelude::*;
let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
bits.align_to::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);
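While individual bit positions may move, the view never gains or loses bits, so length and population counts are preserved across the three sections; a small sketch of those invariants:
use bitvec::prelude::*;
let bytes = [0xFFu8; 7];
let bits = bytes.view_bits::<Lsb0>();
let (pfx, mid, sfx) = unsafe { bits.align_to::<u16>() };
// Realignment only changes the storage view, never the bit count.
assert_eq!(pfx.len() + mid.len() + sfx.len(), bits.len());
assert_eq!(pfx.count_ones() + mid.count_ones() + sfx.count_ones(), 56);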
pub unsafe fn align_to_mut<U>(
&mut self
) -> (&mut BitSlice<T, O>, &mut BitSlice<U, O>, &mut BitSlice<T, O>)where
U: BitStore,
pub unsafe fn align_to_mut<U>(
&mut self
) -> (&mut BitSlice<T, O>, &mut BitSlice<U, O>, &mut BitSlice<T, O>)where
U: BitStore,
Produces bit-slice view(s) with different underlying storage types.
This may have unexpected effects, and you cannot assume that
before[idx] == after[idx]
! Consult the tables in the manual
for information about memory layouts.
§Original
§Notes
Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.
§Safety
You may not use this to cast away alias protections. Rust does not have
support for higher-kinded types, so this cannot express the relation
Outer<T> -> Outer<U> where Outer: BitStoreContainer
, but memory safety
does require that you respect this rule. Reälign integers to integers,
Cell
s to Cell
s, and atomics to atomics, but do not cross these
boundaries.
§Examples
use bitvec::prelude::*;
let mut bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits_mut::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
bits.align_to_mut::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);
§impl<T, O> BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitSlice<T, O>where
T: BitStore,
O: BitOrder,
Views of underlying memory.
pub fn bit_domain(&self) -> BitDomain<'_, Const, T, O>
pub fn bit_domain(&self) -> BitDomain<'_, Const, T, O>
Partitions a bit-slice into maybe-contended and known-uncontended parts.
The documentation of BitDomain
goes into this in more detail. In
short, this produces a &BitSlice
that is as large as possible without
requiring alias protection, as well as any bits that were not able to be
included in the unaliased bit-slice.
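A minimal sketch of inspecting the partition, assuming the BitDomain::Region variant exposes head, body, and tail bit-slices:
use bitvec::prelude::*;
use bitvec::domain::BitDomain;

let data = [0u8; 3];
let bits = &data.view_bits::<Lsb0>()[4 .. 20];
if let BitDomain::Region { head, body, tail } = bits.bit_domain() {
    assert_eq!(head.len(), 4); // partially-used front element
    assert_eq!(body.len(), 8); // the fully-spanned middle element, free of alias protection
    assert_eq!(tail.len(), 4); // partially-used back element
}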
pub fn bit_domain_mut(&mut self) -> BitDomain<'_, Mut, T, O>
pub fn bit_domain_mut(&mut self) -> BitDomain<'_, Mut, T, O>
Partitions a mutable bit-slice into maybe-contended and known-uncontended parts.
The documentation of BitDomain
goes into this in more detail. In
short, this produces a &mut BitSlice
that is as large as possible
without requiring alias protection, as well as any bits that were not
able to be included in the unaliased bit-slice.
pub fn domain(&self) -> Domain<'_, Const, T, O>
pub fn domain(&self) -> Domain<'_, Const, T, O>
Views the underlying memory of a bit-slice, removing alias protections where possible.
The documentation of Domain
goes into this in more detail. In short,
this produces a &[T]
slice with alias protections removed, covering
all elements that self
completely fills. Partially-used elements on
either the front or back edge of the slice are returned separately.
pub fn domain_mut(&mut self) -> Domain<'_, Mut, T, O>
pub fn domain_mut(&mut self) -> Domain<'_, Mut, T, O>
Views the underlying memory of a bit-slice, removing alias protections where possible.
The documentation of Domain
goes into this in more detail. In short,
this produces a &mut [T]
slice with alias protections removed,
covering all elements that self
completely fills. Partially-used
elements on the front or back edge of the slice are returned separately.
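A minimal sketch of viewing the raw elements through .domain(), assuming the Domain::Region variant carries an optional partial head, a body slice of bare elements, and an optional partial tail:
use bitvec::prelude::*;
use bitvec::domain::Domain;

let data = [0u8, 0xFF, 0xFF, 0u8];
let bits = &data.view_bits::<Lsb0>()[4 .. 28];
if let Domain::Region { head, body, tail } = bits.domain() {
    assert_eq!(body, &[0xFFu8, 0xFF]); // elements 1 and 2 are completely filled by `bits`
    assert!(head.is_some()); // bits 4 .. 8 of element 0
    assert!(tail.is_some()); // bits 24 .. 28 of element 3
}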
§impl<T, O> BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitSlice<T, O>where
T: BitStore,
O: BitOrder,
pub fn to_bitvec(&self) -> BitVec<<T as BitStore>::Unalias, O> ⓘ
pub fn to_bitvec(&self) -> BitVec<<T as BitStore>::Unalias, O> ⓘ
Copies a bit-slice into an owned bit-vector.
Since the new vector is freshly owned, this gets marked as ::Unalias
to remove any guards that may have been inserted by the bit-slice’s
history.
It does not change the underlying memory type: a BitSlice<Cell<_>, _> will produce a BitVec<Cell<_>, _>.
§Original
§Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
let bv = bits.to_bitvec();
assert_eq!(bits, bv);
Examples found in repository:
async fn encrypt(
args: &ExecuteArgs,
mut executor_channel: Channel<Message<BooleanGmw>>,
ot_channel: Option<Channel<mul_triple::boolean::ot_ext::DefaultMsg>>,
shared_file: &BitSlice<usize>,
shared_key: &BitSlice<usize>,
shared_iv: &BitSlice<usize>,
) -> Result<Output<BitVec<usize>>> {
let exec_circ: ExecutableCircuit<bool, BooleanGate, usize> = bincode::deserialize_from(
BufReader::new(File::open(&args.circuit).context("Failed to open circuit file")?),
)?;
let mut input = shared_key.to_bitvec();
input.extend_from_bitslice(shared_iv);
input.extend_from_bitslice(shared_file);
let mtp = match ot_channel {
None => InsecureMTProvider::default().into_dyn(),
Some(ot_channel) => mul_triple::boolean::ot_ext::OtMTProvider::new_with_default_ot_ext(
OsRng,
ot_channel.0,
ot_channel.1,
)
.into_dyn(),
};
let mut executor: Executor<BooleanGmw, usize> = Executor::new(&exec_circ, args.id, mtp).await?;
Ok(executor
.execute(
Input::Scalar(input),
&mut executor_channel.0,
&mut executor_channel.1,
)
.await?)
}
§impl<T, O> BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitSlice<T, O>where
T: BitStore,
O: BitOrder,
pub fn to_vec(&self) -> BitVec<<T as BitStore>::Unalias, O> ⓘ
Deprecated: use .to_bitvec() instead.
pub fn repeat(&self, n: usize) -> BitVec<<T as BitStore>::Unalias, O> ⓘ
pub fn repeat(&self, n: usize) -> BitVec<<T as BitStore>::Unalias, O> ⓘ
Creates a bit-vector by repeating a bit-slice n
times.
§Original
§Panics
This method panics if self.len() * n
exceeds the BitVec
capacity.
§Examples
use bitvec::prelude::*;
assert_eq!(bits![0, 1].repeat(3), bitvec![0, 1, 0, 1, 0, 1]);
This panics by exceeding bit-vector maximum capacity:
use bitvec::prelude::*;
bits![0, 1].repeat(BitSlice::<usize, Lsb0>::MAX_BITS);
impl<T, O> BitSlice<T, O>where
T: BitStore,
O: BitOrder,
Crate internals.
Trait Implementations
§impl<T, O> Binary for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> Binary for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§Bit-Slice Rendering
This implementation prints the contents of a &BitSlice
in one of binary,
octal, or hexadecimal. It is important to note that this does not render the
raw underlying memory! They render the semantically-ordered contents of the
bit-slice as numerals. This distinction matters if you use type parameters that
differ from those presumed by your debugger (which is usually <u8, Msb0>
).
The output separates the T
elements as individual list items, and renders each
element as a base-2, -8, or -16 numeric string. When walking an element, the bits
traversed by the bit-slice are considered to be stored in
most-significant-bit-first ordering. This means that index [0]
is the high bit
of the left-most digit, and index [n]
is the low bit of the right-most digit,
in a given printed word.
In order to render according to expectations of the Arabic numeral system, an
element being transcribed is chunked into digits from the least-significant end
of its rendered form. This is most noticeable in octal, which will always have a
smaller ceiling on the left-most digit in a printed word, while the right-most
digit in that word is able to use the full 0 ..= 7
numeral range.
§Examples
use bitvec::prelude::*;
let data = [
0b000000_10u8,
// digits print LTR
0b10_001_101,
// significance is computed RTL
0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];
assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");
The {:#}
format modifier causes the standard 0b
, 0o
, or 0x
prefix to be
applied to each printed word. The other format specifiers are not interpreted by
this implementation, and apply to the entire rendered text, not to individual
words.
§impl<A, O> BitAndAssign<&BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitAndAssign<&BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
§fn bitand_assign(&mut self, rhs: &BitArray<A, O>)
fn bitand_assign(&mut self, rhs: &BitArray<A, O>)
&=
operation. Read more§impl<T, O> BitAndAssign<&BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitAndAssign<&BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitand_assign(&mut self, rhs: &BitBox<T, O>)
fn bitand_assign(&mut self, rhs: &BitBox<T, O>)
&=
operation. Read more§impl<T1, T2, O1, O2> BitAndAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> BitAndAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
§fn bitand_assign(&mut self, rhs: &BitSlice<T2, O2>)
fn bitand_assign(&mut self, rhs: &BitSlice<T2, O2>)
§Boolean Arithmetic
This merges another bit-slice into self
with a Boolean arithmetic operation.
If the other bit-slice is shorter than self
, it is zero-extended. For BitAnd
,
this clears all excess bits of self
to 0
; for BitOr
and BitXor
, it
leaves them untouched.
§Behavior
The Boolean operation proceeds across each bit-slice in iteration order. This is
O(n)
in the length of the shorter of self
and rhs
. However, it can be
accelerated if rhs
has the same type parameters as self
, and both are using
one of the orderings provided by bitvec
. In this case, the implementation
specializes to use BitField
batch operations to operate on the slices one word
at a time, rather than one bit.
Acceleration is not currently provided for custom bit-orderings that use the same storage type.
§Pre-1.0 Behavior
In the 0.x development series, Boolean arithmetic was implemented against all
I: Iterator<Item = bool>. This allowed code such as bits |= [false, true];, but
forbade acceleration in the most common use case (combining two bit-slices)
because BitSlice
is not such an iterator.
Usage surveys indicate that it is better for the arithmetic operators to operate
on bit-slices, and to allow the possibility of specialized acceleration, rather
than to allow folding against any iterator of bool
s.
If pre-1.0
code relies on this behavior specifically, and has non-BitSlice
arguments to the Boolean sigils, then they will need to be replaced with the
equivalent loop.
§Examples
use bitvec::prelude::*;
let a = bits![mut 0, 0, 1, 1];
let b = bits![ 0, 1, 0, 1];
*a ^= b;
assert_eq!(a, bits![0, 1, 1, 0]);
let c = bits![mut 0, 0, 1, 1];
let d = [false, true, false, true];
// no longer allowed
// c &= d.into_iter().by_vals();
for (mut c, d) in c.iter_mut().zip(d.into_iter())
{
*c ^= d;
}
assert_eq!(c, bits![0, 1, 1, 0]);
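A short sketch of the zero-extension rule stated above: when the right-hand side is shorter, &= clears the excess bits of self.
use bitvec::prelude::*;
let a = bits![mut 1, 1, 1, 1];
let b = bits![1, 0];
// `b` is zero-extended to the length of `a`, so the last two bits of `a` are cleared.
*a &= b;
assert_eq!(a, bits![1, 0, 0, 0]);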
§impl<T, O> BitAndAssign<&BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitAndAssign<&BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitand_assign(&mut self, rhs: &BitVec<T, O>)
fn bitand_assign(&mut self, rhs: &BitVec<T, O>)
&=
operation. Read more§impl<A, O> BitAndAssign<BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitAndAssign<BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
§fn bitand_assign(&mut self, rhs: BitArray<A, O>)
fn bitand_assign(&mut self, rhs: BitArray<A, O>)
&=
operation. Read more§impl<T, O> BitAndAssign<BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitAndAssign<BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitand_assign(&mut self, rhs: BitBox<T, O>)
fn bitand_assign(&mut self, rhs: BitBox<T, O>)
&=
operation. Read more§impl<T, O> BitAndAssign<BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitAndAssign<BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitand_assign(&mut self, rhs: BitVec<T, O>)
fn bitand_assign(&mut self, rhs: BitVec<T, O>)
&=
operation. Read more§impl<T> BitField for BitSlice<T>where
T: BitStore,
impl<T> BitField for BitSlice<T>where
T: BitStore,
§Lsb0
Bit-Field Behavior
BitField
has no requirements about the in-memory representation or layout of
stored integers within a bit-slice, only that round-tripping an integer through
a store and a load of the same element suffix on the same bit-slice is
idempotent (with respect to sign truncation).
Lsb0
provides a contiguous translation from bit-index to real memory: for any
given bit index n
and its position P(n)
, P(n + 1)
is P(n) + 1
. This
allows it to provide batched behavior: since the section of contiguous indices
used within an element translates to a section of contiguous bits in real
memory, the transaction is always a single shift/mask operation.
Each implemented method contains documentation and examples showing exactly how the abstract integer space is mapped to real memory.
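A small sketch of the round-trip property described above, storing and re-loading an integer through the same bit-span:
use bitvec::prelude::*;
let mut raw = 0u16;
// Store an 8-bit value into an 8-bit span of a `u16`, then load it back.
raw.view_bits_mut::<Lsb0>()[3 .. 11].store_le(0xA5u8);
assert_eq!(raw.view_bits::<Lsb0>()[3 .. 11].load_le::<u8>(), 0xA5);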
§fn load_le<I>(&self) -> Iwhere
I: Integral,
fn load_le<I>(&self) -> Iwhere
I: Integral,
§Lsb0
Little-Endian Integer Loading
This implementation uses the Lsb0
bit-ordering to determine which bits in a
partially-occupied memory element contain the contents of an integer to be
loaded, using little-endian element ordering.
See the trait method definition for an overview of what element ordering means.
§Signed-Integer Loading
As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the final element.
§Examples
In each memory element, the Lsb0
ordering counts indices leftward from the
right edge:
use bitvec::prelude::*;
let raw = 0b00_10110_0u8;
// 76 54321 0
// ^ sign bit
assert_eq!(
raw.view_bits::<Lsb0>()
[1 .. 6]
.load_le::<u8>(),
0b000_10110,
);
assert_eq!(
raw.view_bits::<Lsb0>()
[1 .. 6]
.load_le::<i8>(),
0b111_10110u8 as i8,
);
In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:
use bitvec::prelude::*;
let raw = [
0x8_Fu8,
// 7 0
0x0_1u8,
// 15 8
0b1111_0010u8,
// ^ sign bit
// 23 16
];
assert_eq!(
raw.view_bits::<Lsb0>()
[4 .. 20]
.load_le::<u16>(),
0x2018u16,
);
Note that while these examples use u8
storage for convenience in displaying
the literals, BitField
operates identically with any storage type. As most
machines use little-endian byte ordering within wider element types, and
bitvec
exclusively operates on elements, the actual bytes of memory may
rapidly start to behave oddly when translating between numeric literals and
in-memory representation.
The user guide has a chapter that translates bit indices into memory positions
for each combination of <T: BitStore, O: BitOrder>
, and may be of additional
use when choosing a combination of type parameters and load functions.
§fn load_be<I>(&self) -> Iwhere
I: Integral,
fn load_be<I>(&self) -> Iwhere
I: Integral,
§Lsb0
Big-Endian Integer Loading
This implementation uses the Lsb0
bit-ordering to determine which bits in a
partially-occupied memory element contain the contents of an integer to be
loaded, using big-endian element ordering.
See the trait method definition for an overview of what element ordering means.
§Signed-Integer Loading
As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the first element.
§Examples
In each memory element, the Lsb0
ordering counts indices leftward from the
right edge:
use bitvec::prelude::*;
let raw = 0b00_10110_0u8;
// 76 54321 0
// ^ sign bit
assert_eq!(
raw.view_bits::<Lsb0>()
[1 .. 6]
.load_be::<u8>(),
0b000_10110,
);
assert_eq!(
raw.view_bits::<Lsb0>()
[1 .. 6]
.load_be::<i8>(),
0b111_10110u8 as i8,
);
In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numeric significance decreases:
use bitvec::prelude::*;
let raw = [
0b0010_1111u8,
// ^ sign bit
// 7 0
0x0_1u8,
// 15 8
0xF_8u8,
// 23 16
];
assert_eq!(
raw.view_bits::<Lsb0>()
[4 .. 20]
.load_be::<u16>(),
0x2018u16,
);
Note that while these examples use u8
storage for convenience in displaying
the literals, BitField
operates identically with any storage type. As most
machines use little-endian byte ordering within wider element types, and
bitvec
exclusively operates on elements, the actual bytes of memory may
rapidly start to behave oddly when translating between numeric literals and
in-memory representation.
The user guide has a chapter that translates bit indices into memory positions
for each combination of <T: BitStore, O: BitOrder>
, and may be of additional
use when choosing a combination of type parameters and load functions.
§fn store_le<I>(&mut self, value: I)where
I: Integral,
fn store_le<I>(&mut self, value: I)where
I: Integral,
§Lsb0
Little-Endian Integer Storing
This implementation uses the Lsb0
bit-ordering to determine which bits in a
partially-occupied memory element are used for storage, using little-endian
element ordering.
See the trait method definition for an overview of what element ordering means.
§Narrowing Behavior
Integers are truncated from the high end. When storing into a bit-slice of
length n
, the n
least numerically significant bits are stored, and any
remaining high bits are ignored.
Be aware of this behavior if you are storing signed integers! The signed integer
-14i8
(bit pattern 0b1111_0010u8
) will, when stored into and loaded back
from a 4-bit slice, become the value 2i8
.
§Examples
use bitvec::prelude::*;
let mut raw = 0u8;
raw.view_bits_mut::<Lsb0>()
[1 .. 6]
.store_le(22u8);
assert_eq!(raw, 0b00_10110_0);
// 76 54321 0
raw.view_bits_mut::<Lsb0>()
[1 .. 6]
.store_le(-10i8);
assert_eq!(raw, 0b00_10110_0);
In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:
use bitvec::prelude::*;
let mut raw = [!0u8; 3];
raw.view_bits_mut::<Lsb0>()
[4 .. 20]
.store_le(0x2018u16);
assert_eq!(raw, [
0x8_F,
// 7 0
0x0_1,
// 15 8
0xF_2,
// 23 16
]);
Note that while these examples use u8
storage for convenience in displaying
the literals, BitField
operates identically with any storage type. As most
machines use little-endian byte ordering within wider element types, and
bitvec
exclusively operates on elements, the actual bytes of memory may
rapidly start to behave oddly when translating between numeric literals and
in-memory representation.
The user guide has a chapter that translates bit indices into memory positions
for each combination of <T: BitStore, O: BitOrder>
, and may be of additional
use when choosing a combination of type parameters and store functions.
§fn store_be<I>(&mut self, value: I)where
I: Integral,
fn store_be<I>(&mut self, value: I)where
I: Integral,
§Lsb0
Big-Endian Integer Storing
This implementation uses the Lsb0
bit-ordering to determine which bits in a
partially-occupied memory element are used for storage, using big-endian element
ordering.
See the trait method definition for an overview of what element ordering means.
§Narrowing Behavior
Integers are truncated from the high end. When storing into a bit-slice of
length n
, the n
least numerically significant bits are stored, and any
remaining high bits are ignored.
Be aware of this behavior if you are storing signed integers! The signed integer
-14i8
(bit pattern 0b1111_0010u8
) will, when stored into and loaded back
from a 4-bit slice, become the value 2i8
.
§Examples
use bitvec::prelude::*;
let mut raw = 0u8;
raw.view_bits_mut::<Lsb0>()
[1 .. 6]
.store_be(22u8);
assert_eq!(raw, 0b00_10110_0);
// 76 54321 0
raw.view_bits_mut::<Lsb0>()
[1 .. 6]
.store_be(-10i8);
assert_eq!(raw, 0b00_10110_0);
In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numerical significance decreases:
use bitvec::prelude::*;
let mut raw = [!0u8; 3];
raw.view_bits_mut::<Lsb0>()
[4 .. 20]
.store_be(0x2018u16);
assert_eq!(raw, [
0x2_F,
// 7 0
0x0_1,
// 15 8
0xF_8,
// 23 16
]);
Note that while these examples use u8
storage for convenience in displaying
the literals, BitField
operates identically with any storage type. As most
machines use little-endian byte ordering within wider element types, and
bitvec
exclusively operates on elements, the actual bytes of memory may
rapidly start to behave oddly when translating between numeric literals and
in-memory representation.
The user guide has a chapter that translates bit indices into memory positions
for each combination of <T: BitStore, O: BitOrder>
, and may be of additional
use when choosing a combination of type parameters and store functions.
§impl<A, O> BitOrAssign<&BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitOrAssign<&BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
§fn bitor_assign(&mut self, rhs: &BitArray<A, O>)
fn bitor_assign(&mut self, rhs: &BitArray<A, O>)
|=
operation. Read more§impl<T, O> BitOrAssign<&BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitOrAssign<&BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitor_assign(&mut self, rhs: &BitBox<T, O>)
fn bitor_assign(&mut self, rhs: &BitBox<T, O>)
|=
operation. Read more§impl<T1, T2, O1, O2> BitOrAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> BitOrAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
§fn bitor_assign(&mut self, rhs: &BitSlice<T2, O2>)
fn bitor_assign(&mut self, rhs: &BitSlice<T2, O2>)
§Boolean Arithmetic
This merges another bit-slice into self
with a Boolean arithmetic operation.
If the other bit-slice is shorter than self
, it is zero-extended. For BitAnd
,
this clears all excess bits of self
to 0
; for BitOr
and BitXor
, it
leaves them untouched.
§Behavior
The Boolean operation proceeds across each bit-slice in iteration order. This is
O(n)
in the length of the shorter of self
and rhs
. However, it can be
accelerated if rhs
has the same type parameters as self
, and both are using
one of the orderings provided by bitvec
. In this case, the implementation
specializes to use BitField
batch operations to operate on the slices one word
at a time, rather than one bit.
Acceleration is not currently provided for custom bit-orderings that use the same storage type.
§Pre-1.0 Behavior
In the 0.x development series, Boolean arithmetic was implemented against all
I: Iterator<Item = bool>. This allowed code such as bits |= [false, true];, but
forbade acceleration in the most common use case (combining two bit-slices)
because BitSlice
is not such an iterator.
Usage surveys indicate that it is better for the arithmetic operators to operate
on bit-slices, and to allow the possibility of specialized acceleration, rather
than to allow folding against any iterator of bool
s.
If pre-1.0
code relies on this behavior specifically, and has non-BitSlice
arguments to the Boolean sigils, then they will need to be replaced with the
equivalent loop.
§Examples
use bitvec::prelude::*;
let a = bits![mut 0, 0, 1, 1];
let b = bits![ 0, 1, 0, 1];
*a ^= b;
assert_eq!(a, bits![0, 1, 1, 0]);
let c = bits![mut 0, 0, 1, 1];
let d = [false, true, false, true];
// no longer allowed
// c &= d.into_iter().by_vals();
for (mut c, d) in c.iter_mut().zip(d.into_iter())
{
*c ^= d;
}
assert_eq!(c, bits![0, 1, 1, 0]);
§impl<T, O> BitOrAssign<&BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitOrAssign<&BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitor_assign(&mut self, rhs: &BitVec<T, O>)
fn bitor_assign(&mut self, rhs: &BitVec<T, O>)
|=
operation. Read more§impl<A, O> BitOrAssign<BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitOrAssign<BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
§fn bitor_assign(&mut self, rhs: BitArray<A, O>)
fn bitor_assign(&mut self, rhs: BitArray<A, O>)
|=
operation. Read more§impl<T, O> BitOrAssign<BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitOrAssign<BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitor_assign(&mut self, rhs: BitBox<T, O>)
fn bitor_assign(&mut self, rhs: BitBox<T, O>)
|=
operation. Read more§impl<T, O> BitOrAssign<BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitOrAssign<BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitor_assign(&mut self, rhs: BitVec<T, O>)
fn bitor_assign(&mut self, rhs: BitVec<T, O>)
|=
operation. Read more§impl<A, O> BitXorAssign<&BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitXorAssign<&BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
§fn bitxor_assign(&mut self, rhs: &BitArray<A, O>)
fn bitxor_assign(&mut self, rhs: &BitArray<A, O>)
^=
operation. Read more§impl<T, O> BitXorAssign<&BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitXorAssign<&BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitxor_assign(&mut self, rhs: &BitBox<T, O>)
fn bitxor_assign(&mut self, rhs: &BitBox<T, O>)
^=
operation. Read more§impl<T1, T2, O1, O2> BitXorAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> BitXorAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
§fn bitxor_assign(&mut self, rhs: &BitSlice<T2, O2>)
fn bitxor_assign(&mut self, rhs: &BitSlice<T2, O2>)
§Boolean Arithmetic
This merges another bit-slice into self
with a Boolean arithmetic operation.
If the other bit-slice is shorter than self
, it is zero-extended. For BitAnd
,
this clears all excess bits of self
to 0
; for BitOr
and BitXor
, it
leaves them untouched.
§Behavior
The Boolean operation proceeds across each bit-slice in iteration order. This is
O(n)
in the length of the shorter of self
and rhs
. However, it can be
accelerated if rhs
has the same type parameters as self
, and both are using
one of the orderings provided by bitvec
. In this case, the implementation
specializes to use BitField
batch operations to operate on the slices one word
at a time, rather than one bit.
Acceleration is not currently provided for custom bit-orderings that use the same storage type.
§Pre-1.0 Behavior
In the 0.x development series, Boolean arithmetic was implemented against all
I: Iterator<Item = bool>. This allowed code such as bits |= [false, true];, but
forbade acceleration in the most common use case (combining two bit-slices)
because BitSlice
is not such an iterator.
Usage surveys indicate that it is better for the arithmetic operators to operate
on bit-slices, and to allow the possibility of specialized acceleration, rather
than to allow folding against any iterator of bool
s.
If pre-1.0
code relies on this behavior specifically, and has non-BitSlice
arguments to the Boolean sigils, then they will need to be replaced with the
equivalent loop.
§Examples
use bitvec::prelude::*;
let a = bits![mut 0, 0, 1, 1];
let b = bits![ 0, 1, 0, 1];
*a ^= b;
assert_eq!(a, bits![0, 1, 1, 0]);
let c = bits![mut 0, 0, 1, 1];
let d = [false, true, false, true];
// no longer allowed
// c &= d.into_iter().by_vals();
for (mut c, d) in c.iter_mut().zip(d.into_iter())
{
*c ^= d;
}
assert_eq!(c, bits![0, 1, 1, 0]);
§impl<T, O> BitXorAssign<&BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitXorAssign<&BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitxor_assign(&mut self, rhs: &BitVec<T, O>)
fn bitxor_assign(&mut self, rhs: &BitVec<T, O>)
^=
operation. Read more§impl<A, O> BitXorAssign<BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitXorAssign<BitArray<A, O>> for BitSlice<<A as BitView>::Store, O>where
A: BitViewSized,
O: BitOrder,
§fn bitxor_assign(&mut self, rhs: BitArray<A, O>)
fn bitxor_assign(&mut self, rhs: BitArray<A, O>)
^=
operation. Read more§impl<T, O> BitXorAssign<BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitXorAssign<BitBox<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitxor_assign(&mut self, rhs: BitBox<T, O>)
fn bitxor_assign(&mut self, rhs: BitBox<T, O>)
^=
operation. Read more§impl<T, O> BitXorAssign<BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> BitXorAssign<BitVec<T, O>> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn bitxor_assign(&mut self, rhs: BitVec<T, O>)
fn bitxor_assign(&mut self, rhs: BitVec<T, O>)
^=
operation. Read more§impl<T, O> Index<RangeInclusive<usize>> for BitSlice<T, O>where
O: BitOrder,
T: BitStore,
impl<T, O> Index<RangeInclusive<usize>> for BitSlice<T, O>where
O: BitOrder,
T: BitStore,
§impl<T, O> Index<RangeToInclusive<usize>> for BitSlice<T, O>where
O: BitOrder,
T: BitStore,
impl<T, O> Index<RangeToInclusive<usize>> for BitSlice<T, O>where
O: BitOrder,
T: BitStore,
§impl<T, O> Index<usize> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> Index<usize> for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§fn index(&self, index: usize) -> &<BitSlice<T, O> as Index<usize>>::Output ⓘ
fn index(&self, index: usize) -> &<BitSlice<T, O> as Index<usize>>::Output ⓘ
Looks up a single bit by its semantic index.
§Examples
use bitvec::prelude::*;
let bits = bits![u8, Msb0; 0, 1, 0];
assert!(!bits[0]); // -----^ | |
assert!( bits[1]); // --------^ |
assert!(!bits[2]); // -----------^
If the index is greater than or equal to the length, indexing will panic.
The below test will panic when accessing index 1, as only index 0 is valid.
use bitvec::prelude::*;
let bits = bits![0];
bits[1]; // --------^
§impl<T, O> IndexMut<RangeInclusive<usize>> for BitSlice<T, O>where
O: BitOrder,
T: BitStore,
impl<T, O> IndexMut<RangeInclusive<usize>> for BitSlice<T, O>where
O: BitOrder,
T: BitStore,
§fn index_mut(
&mut self,
index: RangeInclusive<usize>
) -> &mut <BitSlice<T, O> as Index<RangeInclusive<usize>>>::Output ⓘ
fn index_mut( &mut self, index: RangeInclusive<usize> ) -> &mut <BitSlice<T, O> as Index<RangeInclusive<usize>>>::Output ⓘ
container[index]
) operation. Read more§impl<T, O> IndexMut<RangeToInclusive<usize>> for BitSlice<T, O>where
O: BitOrder,
T: BitStore,
impl<T, O> IndexMut<RangeToInclusive<usize>> for BitSlice<T, O>where
O: BitOrder,
T: BitStore,
§fn index_mut(
&mut self,
index: RangeToInclusive<usize>
) -> &mut <BitSlice<T, O> as Index<RangeToInclusive<usize>>>::Output ⓘ
fn index_mut( &mut self, index: RangeToInclusive<usize> ) -> &mut <BitSlice<T, O> as Index<RangeToInclusive<usize>>>::Output ⓘ
container[index]
) operation. Read more§impl<T, O> LowerHex for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> LowerHex for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§Bit-Slice Rendering
This implementation prints the contents of a &BitSlice
in one of binary,
octal, or hexadecimal. It is important to note that this does not render the
raw underlying memory! They render the semantically-ordered contents of the
bit-slice as numerals. This distinction matters if you use type parameters that
differ from those presumed by your debugger (which is usually <u8, Msb0>
).
The output separates the T
elements as individual list items, and renders each
element as a base-2, -8, or -16 numeric string. When walking an element, the bits
traversed by the bit-slice are considered to be stored in
most-significant-bit-first ordering. This means that index [0]
is the high bit
of the left-most digit, and index [n]
is the low bit of the right-most digit,
in a given printed word.
In order to render according to expectations of the Arabic numeral system, an
element being transcribed is chunked into digits from the least-significant end
of its rendered form. This is most noticeable in octal, which will always have a
smaller ceiling on the left-most digit in a printed word, while the right-most
digit in that word is able to use the full 0 ..= 7
numeral range.
§Examples
use bitvec::prelude::*;
let data = [
0b000000_10u8,
// digits print LTR
0b10_001_101,
// significance is computed RTL
0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];
assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");
The {:#}
format modifier causes the standard 0b
, 0o
, or 0x
prefix to be
applied to each printed word. The other format specifiers are not interpreted by
this implementation, and apply to the entire rendered text, not to individual
words.
§impl<T, O> Octal for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> Octal for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§Bit-Slice Rendering
This implementation prints the contents of a &BitSlice
in one of binary,
octal, or hexadecimal. It is important to note that this does not render the
raw underlying memory! They render the semantically-ordered contents of the
bit-slice as numerals. This distinction matters if you use type parameters that
differ from those presumed by your debugger (which is usually <u8, Msb0>
).
The output separates the T
elements as individual list items, and renders each
element as a base-2, -8, or -16 numeric string. When walking an element, the bits
traversed by the bit-slice are considered to be stored in
most-significant-bit-first ordering. This means that index [0]
is the high bit
of the left-most digit, and index [n]
is the low bit of the right-most digit,
in a given printed word.
In order to render according to expectations of the Arabic numeral system, an
element being transcribed is chunked into digits from the least-significant end
of its rendered form. This is most noticeable in octal, which will always have a
smaller ceiling on the left-most digit in a printed word, while the right-most
digit in that word is able to use the full 0 ..= 7
numeral range.
§Examples
use bitvec::prelude::*;
let data = [
0b000000_10u8,
// digits print LTR
0b10_001_101,
// significance is computed RTL
0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];
assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");
The {:#}
format modifier causes the standard 0b
, 0o
, or 0x
prefix to be
applied to each printed word. The other format specifiers are not interpreted by
this implementation, and apply to the entire rendered text, not to individual
words.
§impl<T1, T2, O1, O2> PartialEq<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> PartialEq<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
§impl<T1, T2, O1, O2> PartialEq<&mut BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> PartialEq<&mut BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
§impl<O1, A, O2, T> PartialEq<BitArray<A, O2>> for BitSlice<T, O1>where
O1: BitOrder,
O2: BitOrder,
A: BitViewSized,
T: BitStore,
impl<O1, A, O2, T> PartialEq<BitArray<A, O2>> for BitSlice<T, O1>where
O1: BitOrder,
O2: BitOrder,
A: BitViewSized,
T: BitStore,
§impl<O1, O2, T1, T2> PartialEq<BitBox<T2, O2>> for BitSlice<T1, O1>where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
impl<O1, O2, T1, T2> PartialEq<BitBox<T2, O2>> for BitSlice<T1, O1>where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
§impl<T1, T2, O1, O2> PartialEq<BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> PartialEq<BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
Tests if two BitSlice
s are semantically — not representationally — equal.
It is valid to compare slices of different ordering or memory types.
The equality condition requires that they have the same length and that at each index, the two slices have the same bit value.
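A brief sketch of semantic equality across differing type parameters:
use bitvec::prelude::*;
let a = bits![u8, Msb0; 0, 1, 0, 1];
let b = bits![u16, Lsb0; 0, 1, 0, 1];
// Same length and the same bit value at every index, so they compare equal
// even though their storage and ordering differ.
assert_eq!(a, b);
assert_ne!(a, bits![u8, Msb0; 0, 1, 0]);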
§impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
§impl<T1, T2, O1, O2> PartialOrd<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> PartialOrd<&BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
§fn partial_cmp(&self, rhs: &&BitSlice<T2, O2>) -> Option<Ordering>
fn partial_cmp(&self, rhs: &&BitSlice<T2, O2>) -> Option<Ordering>
1.0.0 · source§fn le(&self, other: &Rhs) -> bool
fn le(&self, other: &Rhs) -> bool
self
and other
) and is used by the <=
operator. Read more§impl<T1, T2, O1, O2> PartialOrd<&mut BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> PartialOrd<&mut BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
§fn partial_cmp(&self, rhs: &&mut BitSlice<T2, O2>) -> Option<Ordering>
fn partial_cmp(&self, rhs: &&mut BitSlice<T2, O2>) -> Option<Ordering>
1.0.0 · source§fn le(&self, other: &Rhs) -> bool
fn le(&self, other: &Rhs) -> bool
self
and other
) and is used by the <=
operator. Read more§impl<A, T, O> PartialOrd<BitArray<A, O>> for BitSlice<T, O>where
A: BitViewSized,
T: BitStore,
O: BitOrder,
impl<A, T, O> PartialOrd<BitArray<A, O>> for BitSlice<T, O>where
A: BitViewSized,
T: BitStore,
O: BitOrder,
§fn partial_cmp(&self, other: &BitArray<A, O>) -> Option<Ordering>
fn partial_cmp(&self, other: &BitArray<A, O>) -> Option<Ordering>
1.0.0 · source§fn le(&self, other: &Rhs) -> bool
fn le(&self, other: &Rhs) -> bool
self
and other
) and is used by the <=
operator. Read more§impl<O1, O2, T1, T2> PartialOrd<BitBox<T2, O2>> for BitSlice<T1, O1>where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
impl<O1, O2, T1, T2> PartialOrd<BitBox<T2, O2>> for BitSlice<T1, O1>where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
§fn partial_cmp(&self, other: &BitBox<T2, O2>) -> Option<Ordering>
fn partial_cmp(&self, other: &BitBox<T2, O2>) -> Option<Ordering>
1.0.0 · source§fn le(&self, other: &Rhs) -> bool
fn le(&self, other: &Rhs) -> bool
self
and other
) and is used by the <=
operator. Read more§impl<T1, T2, O1, O2> PartialOrd<BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> PartialOrd<BitSlice<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
Compares two BitSlice
s by semantic — not representational — ordering.
The comparison sorts by testing at each index if one slice has a high bit where the other has a low. At the first index where the slices differ, the slice with the high bit is greater. If the slices are equal until at least one terminates, then they are compared by length.
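A brief sketch of the ordering rules described above:
use bitvec::prelude::*;
let a = bits![0, 1, 0];
let b = bits![0, 1, 1];
// The first difference is at index 2, where `b` holds the high bit.
assert!(a < b);
let c = bits![0, 1];
// `c` matches `a` until it ends, so the comparison falls back to length.
assert!(c < a);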
§fn partial_cmp(&self, rhs: &BitSlice<T2, O2>) -> Option<Ordering>
fn partial_cmp(&self, rhs: &BitSlice<T2, O2>) -> Option<Ordering>
1.0.0 · source§fn le(&self, other: &Rhs) -> bool
fn le(&self, other: &Rhs) -> bool
self
and other
) and is used by the <=
operator. Read more§impl<T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
impl<T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for BitSlice<T1, O1>where
T1: BitStore,
T2: BitStore,
O1: BitOrder,
O2: BitOrder,
§fn partial_cmp(&self, other: &BitVec<T2, O2>) -> Option<Ordering>
fn partial_cmp(&self, other: &BitVec<T2, O2>) -> Option<Ordering>
1.0.0 · source§fn le(&self, other: &Rhs) -> bool
fn le(&self, other: &Rhs) -> bool
self
and other
) and is used by the <=
operator. Read more§impl<T, O> Serialize for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
<T as BitStore>::Mem: Serialize,
impl<T, O> Serialize for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
<T as BitStore>::Mem: Serialize,
§fn serialize<S>(
&self,
serializer: S
) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error>where
S: Serializer,
fn serialize<S>(
&self,
serializer: S
) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error>where
S: Serializer,
§impl<T, O> ToOwned for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> ToOwned for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§impl<T, O> UpperHex for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> UpperHex for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
§Bit-Slice Rendering
This implementation prints the contents of a &BitSlice in one of binary, octal, or hexadecimal. It is important to note that it does not render the raw underlying memory! It renders the semantically-ordered contents of the bit-slice as numerals. This distinction matters if you use type parameters that differ from those presumed by your debugger (which is usually <u8, Msb0>).
The output separates the T elements as individual list items, and renders each element as a base-2, -8, or -16 numeric string. When walking an element, the bits traversed by the bit-slice are considered to be stored in most-significant-bit-first ordering. This means that index [0] is the high bit of the left-most digit, and index [n] is the low bit of the right-most digit, in a given printed word.
In order to render according to the expectations of the Arabic numeral system, an element being transcribed is chunked into digits from the least-significant end of its rendered form. This is most noticeable in octal, which will always have a smaller ceiling on the left-most digit in a printed word, while the right-most digit in that word is able to use the full 0 ..= 7 numeral range.
§Examples
use bitvec::prelude::*;
let data = [
0b000000_10u8,
// digits print LTR
0b10_001_101,
// significance is computed RTL
0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];
assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");
The {:#} format modifier causes the standard 0b, 0o, or 0x prefix to be applied to each printed word. The other format specifiers are not interpreted by this implementation, and apply to the entire rendered text, not to individual words.
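As a sketch of the alternate form, only the per-word prefix is checked here, since the surrounding layout is left to the formatter:
use bitvec::prelude::*;
let data = 0b1000_1101u8;
let bits = data.view_bits::<Msb0>();
// Each printed word individually receives the radix prefix.
let rendered = format!("{:#X}", bits);
assert!(rendered.contains("0x8D"));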
impl<T, O> Eq for BitSlice<T, O>where
T: BitStore,
O: BitOrder,
impl<T, O> Send for BitSlice<T, O>where
T: BitStore + Sync,
O: BitOrder,
§Bit-Slice Thread Safety
This allows bit-slice references to be moved across thread boundaries only when the underlying T element can tolerate concurrency.
All BitSlice references, shared or exclusive, are only thread-safe if the T element type is Send, because any given bit-slice reference may only have partial control of a memory element that is also being shared by a bit-slice reference on another thread. As such, this is never implemented for Cell<U>, but always implemented for AtomicU and U for a given unsigned integer type U.
Atomic integers safely handle concurrent writes, and cells do not allow concurrency at all, so the only missing piece is &mut BitSlice<_, U: Unsigned>. This is handled by the aliasing system that the mutable splitters employ: a mutable reference to an unsynchronized bit-slice can only cross threads when no other handle is able to exist to the elements it governs. Splitting a mutable bit-slice causes the split halves to change over to either atomics or cells, so concurrency is either safe or impossible.
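As a sketch of the shared case, assuming a std environment so that std::thread::scope is available:
use bitvec::prelude::*;
use std::thread;
let data = 0b1010_0110u8;
let bits = data.view_bits::<Msb0>();
// u8 is Sync, so shared &BitSlice references may be read from several
// threads at once; nothing here can write, so no data race is possible.
thread::scope(|s| {
    s.spawn(|| assert!(bits[0]));
    s.spawn(|| assert!(!bits[1]));
});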
impl<T, O> Sync for BitSlice<T, O>where
T: BitStore + Sync,
O: BitOrder,
§Bit-Slice Thread Safety
This allows bit-slice references to be moved across thread boundaries only when the underlying T element can tolerate concurrency.
All BitSlice references, shared or exclusive, are only thread-safe if the T element type is Send, because any given bit-slice reference may only have partial control of a memory element that is also being shared by a bit-slice reference on another thread. As such, this is never implemented for Cell<U>, but always implemented for AtomicU and U for a given unsigned integer type U.
Atomic integers safely handle concurrent writes, and cells do not allow concurrency at all, so the only missing piece is &mut BitSlice<_, U: Unsigned>. This is handled by the aliasing system that the mutable splitters employ: a mutable reference to an unsynchronized bit-slice can only cross threads when no other handle is able to exist to the elements it governs. Splitting a mutable bit-slice causes the split halves to change over to either atomics or cells, so concurrency is either safe or impossible.
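And a sketch of the exclusive case, assuming the crate's default atomic aliasing so that the split halves remain Send:
use bitvec::prelude::*;
use std::thread;
let mut data = 0u16;
let bits = data.view_bits_mut::<Lsb0>();
// Splitting retags both halves with the alias-aware storage type, so each
// exclusive half may be driven from its own thread even though both touch
// the same underlying element.
let (low, high) = bits.split_at_mut(8);
thread::scope(|s| {
    s.spawn(|| low.fill(true));
    s.spawn(|| high.fill(false));
});
assert_eq!(data, 0x00FF);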