The Flutter Engine
SkBlockAllocator Class Reference (final)

#include <SkBlockAllocator.h>

Inheritance diagram for SkBlockAllocator:
SkNoncopyable

Classes

class  Block
 
class  BlockIter
 
struct  ByteRange
 

Public Types

enum class  GrowthPolicy : int {
  kFixed , kLinear , kFibonacci , kExponential ,
  kLast = kExponential
}
 
enum  ReserveFlags : unsigned { kIgnoreGrowthPolicy_Flag = 0b01 , kIgnoreExistingBytes_Flag = 0b10 , kNo_ReserveFlags = 0b00 }
 

Public Member Functions

 SkBlockAllocator (GrowthPolicy policy, size_t blockIncrementBytes, size_t additionalPreallocBytes=0)
 
 ~SkBlockAllocator ()
 
void operator delete (void *p)
 
size_t totalSize () const
 
size_t totalUsableSpace () const
 
size_t totalSpaceInUse () const
 
size_t preallocSize () const
 
size_t preallocUsableSpace () const
 
int metadata () const
 
void setMetadata (int value)
 
template<size_t Align, size_t Padding = 0>
ByteRange allocate (size_t size)
 
template<size_t Align = 1, size_t Padding = 0>
void reserve (size_t size, ReserveFlags flags=kNo_ReserveFlags)
 
const Block * currentBlock () const
 
Block * currentBlock ()
 
const Block * headBlock () const
 
Block * headBlock ()
 
template<size_t Align, size_t Padding = 0>
Block * owningBlock (const void *ptr, int start)
 
template<size_t Align, size_t Padding = 0>
const Block * owningBlock (const void *ptr, int start) const
 
Block * findOwningBlock (const void *ptr)
 
const Block * findOwningBlock (const void *ptr) const
 
void releaseBlock (Block *block)
 
void stealHeapBlocks (SkBlockAllocator *other)
 
void reset ()
 
void resetScratchSpace ()
 
BlockIter< true, false > blocks ()
 
BlockIter< true, true > blocks () const
 
BlockIter< false, false > rblocks ()
 
BlockIter< false, true > rblocks () const
 

Static Public Member Functions

template<size_t Align = 1, size_t Padding = 0>
static constexpr size_t BlockOverhead ()
 
template<size_t Align = 1, size_t Padding = 0>
static constexpr size_t Overhead ()
 

Static Public Attributes

static constexpr int kMaxAllocationSize = 1 << 29
 
static constexpr int kGrowthPolicyCount = static_cast<int>(GrowthPolicy::kLast) + 1
 

Friends

class BlockAllocatorTestAccess
 
class TBlockListTestAccess
 

Detailed Description

SkBlockAllocator provides low-level support for a block-allocated arena with a dynamic tail that tracks space reservations within each block. Its APIs provide the ability to reserve space, resize reservations, and release reservations. It will automatically create new blocks if needed and destroy all remaining blocks when it is destructed. It assumes that anything allocated within its blocks has its destructors called externally. It is recommended that SkBlockAllocator be wrapped by a higher-level allocator that uses the low-level APIs to implement a simpler, purpose-focused API without having to worry as much about byte-level concerns.

SkBlockAllocator has no limit to its total size, but each allocation is limited to 512MB (which should be sufficient for Skia's use cases). This upper allocation limit allows all internal operations to be performed using 'int' and avoid many overflow checks. Static asserts are used to ensure that those operations would not overflow when using the largest possible values.

Possible use modes:

  1. No upfront allocation, either on the stack or as a field:
         SkBlockAllocator allocator(policy, heapAllocSize);
  2. In-place new'd:
         void* mem = operator new(totalSize);
         SkBlockAllocator* allocator =
                 new (mem) SkBlockAllocator(policy, heapAllocSize, totalSize - sizeof(SkBlockAllocator));
         delete allocator;
  3. Use SkSBlockAllocator to increase the preallocation size:
         SkSBlockAllocator<1024> allocator(policy, heapAllocSize);
         sizeof(allocator) == 1024;
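
As an illustration of mode 1 together with a first allocation, here is a minimal sketch; the ByteRange field names (fBlock, fAlignedOffset) are assumptions based on the {block, start, alignedOffset, end} tuple documented under allocate():

    #include "SkBlockAllocator.h"  // header name as shown on this page; the full include path may differ

    void exampleUsage() {
        // Mode 1: stack-allocated allocator; heap blocks grow by roughly 4KB under a fixed policy.
        SkBlockAllocator allocator(SkBlockAllocator::GrowthPolicy::kFixed,
                                   /*blockIncrementBytes=*/4096);

        // Reserve 64 bytes aligned to 16; the user-facing pointer is block->ptr(alignedOffset).
        auto range = allocator.allocate<16>(64);
        void* p = range.fBlock->ptr(range.fAlignedOffset);
        (void) p;  // destructors of anything placed here must be run externally
    }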

Definition at line 56 of file SkBlockAllocator.h.

Member Enumeration Documentation

◆ GrowthPolicy

enum class SkBlockAllocator::GrowthPolicy : int
strong
Enumerator
kFixed 
kLinear 
kFibonacci 
kExponential 
kLast 

Definition at line 62 of file SkBlockAllocator.h.

enum class GrowthPolicy : int {
    kFixed,       // Next block size = N
    kLinear,      //   = #blocks * N
    kFibonacci,   //   = fibonacci(#blocks) * N
    kExponential, //   = 2^#blocks * N
    kLast = kExponential
};

◆ ReserveFlags

enum SkBlockAllocator::ReserveFlags : unsigned

Enumerator
kIgnoreGrowthPolicy_Flag 
kIgnoreExistingBytes_Flag 
kNo_ReserveFlags 

Definition at line 285 of file SkBlockAllocator.h.

enum ReserveFlags : unsigned {
    // If provided to reserve(), the input 'size' will be rounded up to the next size determined
    // by the growth policy of the SkBlockAllocator. If not, 'size' will be aligned to max_align.
    kIgnoreGrowthPolicy_Flag  = 0b01,
    // If provided to reserve(), the number of available bytes of the current block will not
    // be used to satisfy the reservation (assuming the contiguous range was long enough to
    // begin with).
    kIgnoreExistingBytes_Flag = 0b10,

    kNo_ReserveFlags          = 0b00
};

Constructor & Destructor Documentation

◆ SkBlockAllocator()

SkBlockAllocator::SkBlockAllocator ( GrowthPolicy  policy,
size_t  blockIncrementBytes,
size_t  additionalPreallocBytes = 0 
)

Definition at line 17 of file SkBlockAllocator.cpp.

        : fTail(&fHead)
        // Round up to the nearest max-aligned value, and then divide so that fBlockSizeIncrement
        // can effectively fit higher byte counts in its 16 bits of storage
        , fBlockIncrement(SkTo<uint16_t>(
                  std::min(SkAlignTo(blockIncrementBytes, kAddressAlign) / kAddressAlign,
                           (size_t) std::numeric_limits<uint16_t>::max())))
        , fGrowthPolicy(static_cast<uint64_t>(policy))
        , fN0((policy == GrowthPolicy::kLinear || policy == GrowthPolicy::kExponential) ? 1 : 0)
        , fN1(1)
        // The head block always fills remaining space from SkBlockAllocator's size, because it's
        // inline, but can take over the specified number of bytes immediately after it.
        , fHead(/*prev=*/nullptr, additionalPreallocBytes + BaseHeadBlockSize()) {
    SkASSERT(fBlockIncrement >= 1);
    SkASSERT(additionalPreallocBytes <= kMaxAllocationSize);
}

◆ ~SkBlockAllocator()

SkBlockAllocator::~SkBlockAllocator ( )
inline

Definition at line 187 of file SkBlockAllocator.h.

{ this->reset(); }

Member Function Documentation

◆ allocate()

template<size_t Align, size_t Padding>
SkBlockAllocator::ByteRange SkBlockAllocator::allocate ( size_t  size)

Reserve space that will hold 'size' bytes. This will automatically allocate a new block if there is not enough available space in the current block to provide 'size' bytes. The returned ByteRange tuple specifies the Block owning the reserved memory, the full byte range, and the aligned offset within that range to use for the user-facing pointer. The following invariants hold:

  1. block->ptr(alignedOffset) is aligned to Align
  2. end - alignedOffset == size
  3. Padding <= alignedOffset - start <= Padding + Align - 1

Invariant #3, when Padding > 0, allows intermediate allocators to embed metadata along with the allocations. If the Padding bytes are used for some 'struct Meta', then ptr(alignedOffset - sizeof(Meta)) can be safely used as a Meta* if Meta's alignment requirements are less than or equal to the alignment specified in allocate<>. This can be easily guaranteed by using the pattern:

allocate<max(UserAlign, alignof(Meta)), sizeof(Meta)>(userSize);

This ensures that ptr(alignedOffset) will always satisfy UserAlign and ptr(alignedOffset - sizeof(Meta)) will always satisfy alignof(Meta). Alternatively, memcpy can be used to read and write values between start and alignedOffset without worrying about alignment requirements of the metadata.

For over-aligned allocations, the alignedOffset (as an int) may not be a multiple of Align, but the result of ptr(alignedOffset) will be a multiple of Align.
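
As a sketch of the metadata pattern described above (not part of the header), a hypothetical wrapper 'allocateWithMeta' could embed a small header ahead of each user allocation; the ByteRange field names fBlock, fStart, and fAlignedOffset are assumptions matching the {block, start, alignedOffset, end} tuple:

    #include <cstddef>  // std::max_align_t
    #include <new>      // placement new

    struct Meta { int fUserSize; };

    void* allocateWithMeta(SkBlockAllocator* allocator, size_t userSize) {
        constexpr size_t kUserAlign = alignof(std::max_align_t);
        constexpr size_t kAlign = kUserAlign > alignof(Meta) ? kUserAlign : alignof(Meta);

        auto br = allocator->allocate<kAlign, sizeof(Meta)>(userSize);
        // Invariant #3 guarantees at least sizeof(Meta) bytes between fStart and fAlignedOffset,
        // and kAlign >= alignof(Meta) keeps the metadata slot suitably aligned.
        new (br.fBlock->ptr(br.fAlignedOffset - (int) sizeof(Meta))) Meta{(int) userSize};
        return br.fBlock->ptr(br.fAlignedOffset);
    }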

Definition at line 566 of file SkBlockAllocator.h.

{
    // Amount of extra space for a new block to make sure the allocation can succeed.
    static constexpr int kBlockOverhead = (int) BlockOverhead<Align, Padding>();

    // Ensures 'offset' and 'end' calculations will be valid
    static_assert((kMaxAllocationSize + SkAlignTo(MaxBlockSize<Align, Padding>(), Align))
                          <= (size_t) std::numeric_limits<int32_t>::max());
    // Ensures size + blockOverhead + addBlock's alignment operations will be valid
    static_assert(kMaxAllocationSize + kBlockOverhead + ((1 << 12) - 1) // 4K align for large blocks
                          <= std::numeric_limits<int32_t>::max());

    if (size > kMaxAllocationSize) {
        SK_ABORT("Allocation too large (%zu bytes requested)", size);
    }

    int iSize = (int) size;
    int offset = fTail->cursor<Align, Padding>();
    int end = offset + iSize;
    if (end > fTail->fSize) {
        this->addBlock(iSize + kBlockOverhead, MaxBlockSize<Align, Padding>());
        offset = fTail->cursor<Align, Padding>();
        end = offset + iSize;
    }

    // Check invariants
    SkASSERT(end <= fTail->fSize);
    SkASSERT(end - offset == iSize);
    SkASSERT(offset - fTail->fCursor >= (int) Padding &&
             offset - fTail->fCursor <= (int) (Padding + Align - 1));
    SkASSERT(reinterpret_cast<uintptr_t>(fTail->ptr(offset)) % Align == 0);

    int start = fTail->fCursor;
    fTail->fCursor = end;

    fTail->unpoisonRange(offset - Padding, end);

    return {fTail, start, offset, end};
}

◆ BlockOverhead()

template<size_t Align, size_t Padding>
constexpr size_t SkBlockAllocator::BlockOverhead ( )
static constexpr

Helper to calculate the minimum number of bytes needed for a heap block, under the assumption that Align will be the requested alignment of the first call to allocate(). For example, to store N instances of T in a heap block, 'blockIncrementBytes' should be set to BlockOverhead<alignof(T)>() + N * sizeof(T) when creating the SkBlockAllocator.
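
For example (illustrative only, with a hypothetical element type Item), that sizing rule could be applied as:

    struct Item { float fX, fY; };
    constexpr int kItemsPerBlock = 256;

    // Each heap block is sized to hold exactly kItemsPerBlock Items plus the block header.
    constexpr size_t kBlockIncrement =
            SkBlockAllocator::BlockOverhead<alignof(Item)>() + kItemsPerBlock * sizeof(Item);

    SkBlockAllocator allocator(SkBlockAllocator::GrowthPolicy::kFixed, kBlockIncrement);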

Definition at line 521 of file SkBlockAllocator.h.

{
    static_assert(SkAlignTo(kDataStart + Padding, Align) >= sizeof(Block));
    return SkAlignTo(kDataStart + Padding, Align);
}

◆ blocks() [1/2]

SkBlockAllocator::BlockIter< true, false > SkBlockAllocator::blocks ( )
inline

Clients can iterate over all active Blocks in the SkBlockAllocator using for loops:

Forward iteration from head to tail block (or the non-const variant):

    for (const Block* b : this->blocks()) { }

Reverse iteration from tail to head block:

    for (const Block* b : this->rblocks()) { }

It is safe to call releaseBlock() on the active block while looping.
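
A small illustrative fragment (assuming an existing 'allocator' instance) that counts the active blocks:

    int activeBlocks = 0;
    for (const SkBlockAllocator::Block* b : allocator.blocks()) {
        (void) b;          // per-block data would be accessed via b->ptr(offset)
        ++activeBlocks;    // counts the inline head block plus any heap blocks
    }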

Definition at line 741 of file SkBlockAllocator.h.

{
    return BlockIter<true, false>(this);
}

◆ blocks() [2/2]

SkBlockAllocator::BlockIter< true, true > SkBlockAllocator::blocks ( ) const
inline

Definition at line 744 of file SkBlockAllocator.h.

{
    return BlockIter<true, true>(this);
}

◆ currentBlock() [1/2]

Block * SkBlockAllocator::currentBlock ( )
inline

Definition at line 316 of file SkBlockAllocator.h.

{ return fTail; }

◆ currentBlock() [2/2]

const Block * SkBlockAllocator::currentBlock ( ) const
inline

Return a pointer to the current block being allocated from. This will never be null.

Definition at line 315 of file SkBlockAllocator.h.

{ return fTail; }

◆ findOwningBlock() [1/2]

SkBlockAllocator::Block * SkBlockAllocator::findOwningBlock ( const void *  ptr)

Find the owning block of the allocated pointer, 'p'. Without any additional information this is O(N) on the number of allocated blocks.

Definition at line 86 of file SkBlockAllocator.cpp.

{
    // When in doubt, search in reverse to find an overlapping block.
    uintptr_t ptr = reinterpret_cast<uintptr_t>(p);
    for (Block* b : this->rblocks()) {
        uintptr_t lowerBound = reinterpret_cast<uintptr_t>(b) + kDataStart;
        uintptr_t upperBound = reinterpret_cast<uintptr_t>(b) + b->fSize;
        if (lowerBound <= ptr && ptr < upperBound) {
            SkASSERT(b->fSentinel == kAssignedMarker);
            return b;
        }
    }
    return nullptr;
}

◆ findOwningBlock() [2/2]

const Block * SkBlockAllocator::findOwningBlock ( const void *  ptr) const
inline

Definition at line 346 of file SkBlockAllocator.h.

{
    return const_cast<SkBlockAllocator*>(this)->findOwningBlock(ptr);
}

◆ headBlock() [1/2]

Block * SkBlockAllocator::headBlock ( )
inline

Definition at line 319 of file SkBlockAllocator.h.

{ return &fHead; }

◆ headBlock() [2/2]

const Block * SkBlockAllocator::headBlock ( ) const
inline

Definition at line 318 of file SkBlockAllocator.h.

{ return &fHead; }

◆ metadata()

int SkBlockAllocator::metadata ( ) const
inline

Get the current value of the allocator-level metadata (a user-oriented slot). This is separate from any block-level metadata, but can serve a similar purpose to compactly support data collections on top of SkBlockAllocator.
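
For example, a higher-level collection might use this slot as an element counter; a minimal sketch, assuming ByteRange exposes fBlock and fAlignedOffset:

    void pushInt(SkBlockAllocator* allocator, int v) {
        auto br = allocator->allocate<alignof(int)>(sizeof(int));
        new (br.fBlock->ptr(br.fAlignedOffset)) int(v);        // requires <new>
        allocator->setMetadata(allocator->metadata() + 1);     // one more element stored
    }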

Definition at line 248 of file SkBlockAllocator.h.

{ return fHead.fAllocatorMetadata; }

◆ operator delete()

void SkBlockAllocator::operator delete ( void *  p)
inline

Definition at line 188 of file SkBlockAllocator.h.

{ ::operator delete(p); }

◆ Overhead()

template<size_t Align, size_t Padding>
constexpr size_t SkBlockAllocator::Overhead ( )
static constexpr

Helper to calculate the minimum number of bytes needed for a preallocation, under the assumption that Align will be the requested alignment of the first call to allocate(). For example, to preallocate an SkSBlockAllocator to hold N instances of T, its size argument should be Overhead<alignof(T)>() + N * sizeof(T).
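
A sketch of that sizing rule, using a hypothetical record type and mirroring use mode 3 from the class description:

    struct Record { uint64_t fValue; };   // requires <cstdint>
    constexpr int kRecordCount = 32;

    // Inline storage sized so kRecordCount Records fit without creating any heap block.
    SkSBlockAllocator<SkBlockAllocator::Overhead<alignof(Record)>() + kRecordCount * sizeof(Record)>
            allocator(SkBlockAllocator::GrowthPolicy::kFixed, /*blockIncrementBytes=*/1024);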

Definition at line 527 of file SkBlockAllocator.h.

{
    // NOTE: On most platforms, SkBlockAllocator is packed; this is not the case on debug builds
    // due to extra fields, or on WASM due to 4byte pointers but 16byte max align.
    return std::max(sizeof(SkBlockAllocator),
                    offsetof(SkBlockAllocator, fHead) + BlockOverhead<Align, Padding>());
}

◆ owningBlock() [1/2]

template<size_t Align, size_t Padding>
SkBlockAllocator::Block * SkBlockAllocator::owningBlock ( const void *  ptr,
int  start 
)

Return the block that owns the allocated 'ptr'. Assuming that an earlier allocation was returned as {b, start, alignedOffset, end} and 'p = b->ptr(alignedOffset)', then 'owningBlock<Align, Padding>(p, start)' will return 'b'.
 
If calling code has already computed a pointer to its metadata, i.e. 'm = p - Padding', then 'owningBlock<Align, 0>(m, start)' will also return 'b', allowing the block to be recovered from the metadata pointer.
 
If calling code has access to the original alignedOffset, this function should not be used, since the owning block is just 'p - alignedOffset', regardless of the original Align or Padding.
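
A hedged sketch of recovering and releasing the owning block, given the 'start' offset recorded at allocation time (the helper name is hypothetical):

    template <size_t Align>
    void releaseOwningBlock(SkBlockAllocator* allocator, const void* p, int start) {
        SkBlockAllocator::Block* b = allocator->owningBlock<Align>(p, start);
        // Once every allocation in 'b' is known to be dead, the whole block can be reclaimed.
        allocator->releaseBlock(b);
    }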

Definition at line 606 of file SkBlockAllocator.h.

{
    // 'p' was originally formed by aligning 'block + start + Padding', producing the inequality:
    //     block + start + Padding <= p <= block + start + Padding + Align-1
    // Rearranging this yields:
    //     block <= p - start - Padding <= block + Align-1
    // Masking these terms by ~(Align-1) reconstructs 'block' if the alignment of the block is
    // greater than or equal to Align (since block & ~(Align-1) == (block + Align-1) & ~(Align-1)
    // in that case). Overalignment does not reduce to inequality unfortunately.
    if /* constexpr */ (Align <= kAddressAlign) {
        Block* block = reinterpret_cast<Block*>(
                (reinterpret_cast<uintptr_t>(p) - start - Padding) & ~(Align - 1));
        SkASSERT(block->fSentinel == kAssignedMarker);
        return block;
    } else {
        // There's not a constant-time expression available to reconstruct the block from 'p',
        // but this is unlikely to happen frequently.
        return this->findOwningBlock(p);
    }
}

◆ owningBlock() [2/2]

template<size_t Align, size_t Padding = 0>
const Block * SkBlockAllocator::owningBlock ( const void *  ptr,
int  start 
) const
inline

Definition at line 337 of file SkBlockAllocator.h.

{
    return const_cast<SkBlockAllocator*>(this)->owningBlock<Align, Padding>(ptr, start);
}

◆ preallocSize()

size_t SkBlockAllocator::preallocSize ( ) const
inline

Return the total number of bytes that were pre-allocated for the SkBlockAllocator. This will include 'additionalPreallocBytes' passed to the constructor, and represents what the total size would become after a call to reset().

Definition at line 230 of file SkBlockAllocator.h.

{
    // Don't double count fHead's Block overhead in both sizeof(SkBlockAllocator) and fSize.
    return sizeof(SkBlockAllocator) + fHead.fSize - BaseHeadBlockSize();
}

◆ preallocUsableSpace()

size_t SkBlockAllocator::preallocUsableSpace ( ) const
inline

Return the usable size of the inline head block; this will be equal to 'additionalPreallocBytes' plus any alignment padding that the system had to add to Block. The returned value represents what could be allocated before a heap block is created.

Definition at line 239 of file SkBlockAllocator.h.

{
    return fHead.fSize - kDataStart;
}

◆ rblocks() [1/2]

SkBlockAllocator::BlockIter< false, false > SkBlockAllocator::rblocks ( )
inline

Definition at line 747 of file SkBlockAllocator.h.

{
    return BlockIter<false, false>(this);
}

◆ rblocks() [2/2]

SkBlockAllocator::BlockIter< false, true > SkBlockAllocator::rblocks ( ) const
inline

Definition at line 750 of file SkBlockAllocator.h.

{
    return BlockIter<false, true>(this);
}

◆ releaseBlock()

void SkBlockAllocator::releaseBlock ( Block *  block)

Explicitly free an entire block, invalidating any remaining allocations from the block. SkBlockAllocator will release all alive blocks automatically when it is destroyed, but this function can be used to reclaim memory over the lifetime of the allocator. The provided 'block' pointer must have previously come from a call to currentBlock() or allocate().

If 'block' represents the inline-allocated head block, its cursor and metadata are instead reset to their defaults.

If the block is not the head block, it may be kept as a scratch block to be reused for subsequent allocation requests, instead of making an entirely new block. A scratch block is not visible when iterating over blocks but is reported in the total size of the allocator.
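
An illustrative fragment (assuming the caller uses Block's block-level metadata() as a live-allocation count, and that releasing the visited block mid-iteration is safe as noted under blocks()) that reclaims blocks which have become empty:

    for (SkBlockAllocator::Block* b : allocator.rblocks()) {
        if (b->metadata() == 0) {
            allocator.releaseBlock(b);   // the inline head block is reset rather than freed
        }
    }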

Definition at line 100 of file SkBlockAllocator.cpp.

{
    if (block == &fHead) {
        // Reset the cursor of the head block so that it can be reused if it becomes the new tail
        block->fCursor = kDataStart;
        block->fMetadata = 0;
        block->poisonRange(kDataStart, block->fSize);
        // Unlike in reset(), we don't set the head's next block to null because there are
        // potentially heap-allocated blocks that are still connected to it.
    } else {
        SkASSERT(block->fPrev);
        block->fPrev->fNext = block->fNext;
        if (block->fNext) {
            SkASSERT(fTail != block);
            block->fNext->fPrev = block->fPrev;
        } else {
            SkASSERT(fTail == block);
            fTail = block->fPrev;
        }

        // The released block becomes the new scratch block (if it's bigger), or delete it
        if (this->scratchBlockSize() < block->fSize) {
            SkASSERT(block != fHead.fPrev); // shouldn't already be the scratch block
            if (fHead.fPrev) {
                delete fHead.fPrev;
            }
            block->markAsScratch();
            fHead.fPrev = block;
        } else {
            delete block;
        }
    }

    // Decrement growth policy (opposite of addBlock()'s increment operations)
    GrowthPolicy gp = static_cast<GrowthPolicy>(fGrowthPolicy);
    if (fN0 > 0 && (fN1 > 1 || gp == GrowthPolicy::kFibonacci)) {
        SkASSERT(gp != GrowthPolicy::kFixed); // fixed never needs undoing, fN0 always is 0
        if (gp == GrowthPolicy::kLinear) {
            fN1 = fN1 - fN0;
        } else if (gp == GrowthPolicy::kFibonacci) {
            // Subtract n0 from n1 to get the prior 2 terms in the fibonacci sequence
            int temp = fN1 - fN0; // yields prior fN0
            fN1 = fN1 - temp;     // yields prior fN1
            fN0 = temp;
        } else {
            SkASSERT(gp == GrowthPolicy::kExponential);
            // Divide by 2 to undo the 2N update from addBlock
            fN1 = fN1 >> 1;
            fN0 = fN1;
        }
    }

    SkASSERT(fN1 >= 1 && fN0 >= 0);
}

◆ reserve()

template<size_t Align, size_t Padding>
void SkBlockAllocator::reserve ( size_t  size,
ReserveFlags  flags = kNo_ReserveFlags 
)

Ensure the block allocator has 'size' contiguous available bytes. After calling this function, currentBlock()->avail<Align, Padding>() may still report less than 'size' if the reserved space was added as a scratch block. This is done so that anything remaining in the current block can still be used if a smaller-than-size allocation is requested. If 'size' is requested by a subsequent allocation, the scratch block will automatically be activated and the request will not itself trigger any malloc.

The optional 'flags' controls how the input size is allocated; by default it will attempt to use available contiguous bytes in the current block and will respect the growth policy of the allocator.
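
A usage sketch: reserving a batch of fixed-size records up front so that the subsequent allocate() calls cannot trigger further mallocs (the names here are illustrative; the ByteRange field names are assumptions):

    struct Record { float fValue; int fId; };

    void appendRecords(SkBlockAllocator* allocator, const Record* src, int count) {
        allocator->reserve<alignof(Record)>(count * sizeof(Record));
        for (int i = 0; i < count; ++i) {
            auto br = allocator->allocate<alignof(Record)>(sizeof(Record));
            new (br.fBlock->ptr(br.fAlignedOffset)) Record(src[i]);  // requires <new>
        }
    }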

Definition at line 543 of file SkBlockAllocator.h.

{
    if (size > kMaxAllocationSize) {
        SK_ABORT("Allocation too large (%zu bytes requested)", size);
    }
    int iSize = (int) size;
    if ((flags & kIgnoreExistingBytes_Flag) ||
        this->currentBlock()->avail<Align, Padding>() < iSize) {

        int blockSize = BlockOverhead<Align, Padding>() + iSize;
        int maxSize = (flags & kIgnoreGrowthPolicy_Flag) ? blockSize
                                                         : MaxBlockSize<Align, Padding>();
        SkASSERT((size_t) maxSize <= (MaxBlockSize<Align, Padding>()));

        SkDEBUGCODE(auto oldTail = fTail;)
        this->addBlock(blockSize, maxSize);
        SkASSERT(fTail != oldTail);
        // Releasing the just added block will move it into scratch space, allowing the original
        // tail's bytes to be used first before the scratch block is activated.
        this->releaseBlock(fTail);
    }
}

◆ reset()

void SkBlockAllocator::reset ( )

Explicitly free all blocks (invalidating all allocations) and reset the head block to its default state. The allocator-level metadata is reset to 0 as well.

Definition at line 169 of file SkBlockAllocator.cpp.

{
    for (Block* b : this->rblocks()) {
        if (b == &fHead) {
            // Reset metadata and cursor, tail points to the head block again
            fTail = b;
            b->fNext = nullptr;
            b->fCursor = kDataStart;
            b->fMetadata = 0;
            // For reset(), but NOT releaseBlock(), the head allocatorMetadata and scratch block
            // are reset/destroyed.
            b->fAllocatorMetadata = 0;
            b->poisonRange(kDataStart, b->fSize);
            this->resetScratchSpace();
        } else {
            delete b;
        }
    }
    SkASSERT(fTail == &fHead && fHead.fNext == nullptr && fHead.fPrev == nullptr &&
             fHead.metadata() == 0 && fHead.fCursor == kDataStart);

    GrowthPolicy gp = static_cast<GrowthPolicy>(fGrowthPolicy);
    fN0 = (gp == GrowthPolicy::kLinear || gp == GrowthPolicy::kExponential) ? 1 : 0;
    fN1 = 1;
}

◆ resetScratchSpace()

void SkBlockAllocator::resetScratchSpace ( )

Remove any reserved scratch space, either from calling reserve() or releaseBlock().

Definition at line 194 of file SkBlockAllocator.cpp.

{
    if (fHead.fPrev) {
        delete fHead.fPrev;
        fHead.fPrev = nullptr;
    }
}

◆ setMetadata()

void SkBlockAllocator::setMetadata ( int  value)
inline

Set the current value of the allocator-level metadata.

Definition at line 253 of file SkBlockAllocator.h.

{ fHead.fAllocatorMetadata = value; }

◆ stealHeapBlocks()

void SkBlockAllocator::stealHeapBlocks ( SkBlockAllocator *  other)

Detach every heap-allocated block owned by 'other' and concatenate them to this allocator's list of blocks. This memory is now managed by this allocator. Since this only transfers ownership of a Block, and a Block itself does not move, any previous allocations remain valid and associated with their original Block instances. SkBlockAllocator-level functions that accept allocated pointers (e.g. findOwningBlock) must now be called on this allocator, not on 'other', for those allocations.

The head block of 'other' cannot be stolen, so higher-level allocators and memory structures must handle that data differently.
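
A minimal sketch of merging a temporary allocator into a long-lived one (the function and variable names are hypothetical):

    void absorbScratch(SkBlockAllocator* mainAllocator, SkBlockAllocator* scratch) {
        mainAllocator->stealHeapBlocks(scratch);
        // Pointers previously handed out from scratch's heap blocks remain valid, but
        // findOwningBlock()/owningBlock() must now be called on mainAllocator.
        // Anything still living in scratch's inline head block was not transferred.
    }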

Definition at line 154 of file SkBlockAllocator.cpp.

{
    Block* toSteal = other->fHead.fNext;
    if (toSteal) {
        // The other's next block connects back to this allocator's current tail, and its new tail
        // becomes the end of other's block linked list.
        SkASSERT(other->fTail != &other->fHead);
        toSteal->fPrev = fTail;
        fTail->fNext = toSteal;
        fTail = other->fTail;
        // The other allocator becomes just its inline head block
        other->fTail = &other->fHead;
        other->fHead.fNext = nullptr;
    } // else no block to steal
}

◆ totalSize()

size_t SkBlockAllocator::totalSize ( ) const

Return the total number of bytes of the allocator, including its instance overhead, per-block overhead and space used for allocations.

Definition at line 55 of file SkBlockAllocator.cpp.

{
    // Use size_t since the sum across all blocks could exceed 'int', even though each block won't
    size_t size = offsetof(SkBlockAllocator, fHead) + this->scratchBlockSize();
    for (const Block* b : this->blocks()) {
        size += b->fSize;
    }
    SkASSERT(size >= this->preallocSize());
    return size;
}

◆ totalSpaceInUse()

size_t SkBlockAllocator::totalSpaceInUse ( ) const

Return the total number of usable bytes that have been reserved by allocations. This will be less than or equal to totalUsableSpace().

Definition at line 77 of file SkBlockAllocator.cpp.

{
    size_t size = 0;
    for (const Block* b : this->blocks()) {
        size += (b->fCursor - kDataStart);
    }
    SkASSERT(size <= this->totalUsableSpace());
    return size;
}

◆ totalUsableSpace()

size_t SkBlockAllocator::totalUsableSpace ( ) const

Return the total number of bytes usable for allocations. This includes bytes that have been reserved already by a call to allocate() and bytes that are still available. It is totalSize() minus all allocator and block-level overhead.
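
The three accounting queries relate as follows (illustrative assertions only, for some allocator instance):

    SkASSERT(allocator.totalSpaceInUse() <= allocator.totalUsableSpace());
    SkASSERT(allocator.totalUsableSpace() <= allocator.totalSize());
    SkASSERT(allocator.preallocSize()     <= allocator.totalSize());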

Definition at line 65 of file SkBlockAllocator.cpp.

{
    size_t size = this->scratchBlockSize();
    if (size > 0) {
        size -= kDataStart; // scratchBlockSize reports total block size, not usable size
    }
    for (const Block* b : this->blocks()) {
        size += (b->fSize - kDataStart);
    }
    SkASSERT(size >= this->preallocUsableSpace());
    return size;
}

Friends And Related Symbol Documentation

◆ BlockAllocatorTestAccess

friend class BlockAllocatorTestAccess
friend

Definition at line 414 of file SkBlockAllocator.h.

◆ TBlockListTestAccess

friend class TBlockListTestAccess
friend

Definition at line 415 of file SkBlockAllocator.h.

Member Data Documentation

◆ kGrowthPolicyCount

constexpr int SkBlockAllocator::kGrowthPolicyCount = static_cast<int>(GrowthPolicy::kLast) + 1
inline static constexpr

Definition at line 69 of file SkBlockAllocator.h.

◆ kMaxAllocationSize

constexpr int SkBlockAllocator::kMaxAllocationSize = 1 << 29
inline static constexpr

Definition at line 60 of file SkBlockAllocator.h.


The documentation for this class was generated from the following files:

SkBlockAllocator.h
SkBlockAllocator.cpp