Cluster mempool definitions & theory

Posted on: December 10, 2023 18:20 UTC

The discussion revolves around the concept of optimal linearization of transaction clusters and its implications for chunking.

When considering the topological validity of transactions, it's noted that validity is required only at the borders between chunks rather than within the chunks themselves. Chunks are defined as sets without an internal order. The trade-off between using sets or sequences for chunks is considered: sets avoid committing to a specific intra-chunk ordering, but converting a chunking back into a linearization becomes less direct.
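As a concrete illustration of the chunks-as-sets view, here is a minimal Python sketch of one standard way to derive a chunking from a linearization: every transaction starts as its own chunk, and the newest chunk is merged backwards whenever its feerate is at least as high as the chunk before it, so chunk feerates come out non-increasing. The function name `chunk_linearization` and the fee/size dictionaries are assumptions for this sketch rather than notation from the post, and merging chunks of equal feerate is just one possible convention (the very-optimality discussion below touches on exactly that).

```python
from fractions import Fraction

def chunk_linearization(linearization, fee, size):
    """Split a linearization (a list of txids) into chunks.

    fee and size map txid -> amount. Returns (frozenset of txids, feerate)
    pairs whose feerates are non-increasing.
    """
    chunks = []  # each entry: [set_of_txids, total_fee, total_size]
    for tx in linearization:
        chunks.append([{tx}, fee[tx], size[tx]])
        # Merge the newest chunk into the previous one while its feerate
        # is at least as high as the previous chunk's feerate.
        while len(chunks) >= 2:
            txs, f, s = chunks[-1]
            prev_txs, pf, ps = chunks[-2]
            if Fraction(f, s) >= Fraction(pf, ps):
                chunks[-2] = [prev_txs | txs, pf + f, ps + s]
                chunks.pop()
            else:
                break
    return [(frozenset(txs), Fraction(f, s)) for txs, f, s in chunks]
```

Going the other way, from a set-valued chunking back to a linearization, requires picking some topologically valid order inside each chunk, which is the trade-off mentioned above.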

The idea of using the convex hull of the feerate diagram as part of a geometric proof is proposed. This method would potentially demonstrate that all subgroupings of transactions fall on or below this convex hull. There's a suggestion that the approach used in the gathering theorem might apply here, although further thought is needed to make the argument rigorous.
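To make that geometric picture concrete, the following sketch (with hypothetical helper names, building on the chunking sketch above) computes the feerate diagram as the cumulative (size, fee) points of a chunking and evaluates it at an arbitrary size, so a candidate subset's (size, fee) point can be tested against the curve. With non-increasing chunk feerates the diagram is concave, which is what a convex-hull argument would exploit.

```python
from fractions import Fraction

def feerate_diagram(chunks, fee, size):
    """Cumulative (size, fee) points of a chunking, starting at (0, 0)."""
    points, total_size, total_fee = [(0, 0)], 0, 0
    for chunk, _ in chunks:  # chunks as produced by chunk_linearization
        total_size += sum(size[tx] for tx in chunk)
        total_fee += sum(fee[tx] for tx in chunk)
        points.append((total_size, total_fee))
    return points

def diagram_fee_at(points, s):
    """Fee of the piecewise-linear diagram at cumulative size s."""
    for (s0, f0), (s1, f1) in zip(points, points[1:]):
        if s0 <= s <= s1:
            return f0 + Fraction(f1 - f0, s1 - s0) * (s - s0)
    return Fraction(points[-1][1])  # past the end: total fee

def lies_on_or_below(subset, points, fee, size):
    """Does the subset's (size, fee) point stay on or below the diagram?"""
    s = sum(size[tx] for tx in subset)
    f = sum(fee[tx] for tx in subset)
    return f <= diagram_fee_at(points, s)
```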

A clearer way to order transactions within chunks has been formalized, with the ordering determined before the function is defined. Questions arise concerning the meaning of "connected components whose feerate is all the same" within the context of optimal linearization. An example is provided to illustrate the point: a cluster with two linearizations, [ACDBE] and [ABCDE], both of which result in an equivalent, optimal chunking despite one chunk containing disconnected components.
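The post does not give fees or sizes for A through E, so the numbers below are purely hypothetical, chosen only to reproduce the described behaviour: B depends only on A, D depends on C which depends on A, and E depends on B and D. Reusing `chunk_linearization` from the earlier sketch, both [ACDBE] and [ABCDE] then chunk into {A}, {B, C, D}, {E}.

```python
# Hypothetical dependency graph and fees/sizes (not from the original post).
deps = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"C"}, "E": {"B", "D"}}
fee = {"A": 10, "B": 2, "C": 2, "D": 2, "E": 1}
size = {tx: 1 for tx in fee}

for lin in (["A", "C", "D", "B", "E"], ["A", "B", "C", "D", "E"]):
    # Both orders respect deps and both yield the same chunks:
    # {A} (feerate 10), {B, C, D} (feerate 2), {E} (feerate 1).
    print(lin, chunk_linearization(lin, fee, size))
```

Within the middle chunk, B touches C and D only through A, which sits in an earlier chunk, so the chunk splits into two disconnected components of equal feerate; this is precisely the situation the "connected components whose feerate is all the same" question is about.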

Finally, a redefinition of very-optimality is suggested, which goes beyond the current definition of optimality. This stronger standard would not only require an optimal diagram but also stipulate that chunks must be connected and cannot contain any topologically-valid strict subset with equal feerate. The aim is a more robust structure for optimal linearizations, with the expectation that every graph admits such a very-optimal linearization.
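A rough sketch of how the two extra conditions could be checked for a single chunk, assuming that "topologically valid" here means every transaction's in-cluster parents lie either in earlier chunks or in the subset itself. The helper names and the brute-force subset enumeration are illustrative only; the post does not spell out an algorithm.

```python
from fractions import Fraction
from itertools import combinations

def is_connected(txs, deps):
    """Connectedness of a chunk, counting only edges between its own txs."""
    txs = set(txs)
    if not txs:
        return True
    neighbours = {tx: set() for tx in txs}
    for tx in txs:
        for parent in deps[tx] & txs:
            neighbours[tx].add(parent)
            neighbours[parent].add(tx)
    seen, todo = set(), [next(iter(txs))]
    while todo:
        tx = todo.pop()
        if tx not in seen:
            seen.add(tx)
            todo.extend(neighbours[tx] - seen)
    return seen == txs

def violates_very_optimality(chunk, earlier, deps, fee, size):
    """True if the chunk is disconnected, or contains a topologically-valid
    strict subset with the same feerate (earlier = txs of previous chunks)."""
    chunk = set(chunk)
    rate = Fraction(sum(fee[t] for t in chunk), sum(size[t] for t in chunk))
    if not is_connected(chunk, deps):
        return True
    for k in range(1, len(chunk)):
        for combo in combinations(sorted(chunk), k):
            subset = set(combo)
            topological = all(deps[t] <= earlier | subset for t in subset)
            same_rate = Fraction(sum(fee[t] for t in subset),
                                 sum(size[t] for t in subset)) == rate
            if topological and same_rate:
                return True
    return False
```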