Resolve some PeerDAS todos #6434

Open · dapplion wants to merge 1 commit into unstable

Conversation

dapplion (Collaborator)

Issue Addressed

Clean up some stale TODO(das) tags.

Proposed Changes

Explained inline in the code.

@dapplion added the ready-for-review (The code is ready for review) and das (Data Availability Sampling) labels on Sep 25, 2024
Comment on lines -1078 to -1119
// TODO(das): How is a consumer of sampling results?
// - Fork-choice for trailing DA
// - Single lookups to complete import requirements
// - Range sync to complete import requirements? Can sampling for syncing lag behind and
// accumulate in fork-choice?

match requester {
SamplingRequester::ImportedBlock(block_root) => {
debug!(self.log, "Sampling result"; "block_root" => %block_root, "result" => ?result);

// TODO(das): Consider moving SamplingResult to the beacon_chain crate and import
// here. No need to add too much enum variants, just whatever the beacon_chain or
// fork-choice needs to make a decision. Currently the fork-choice only needs to
// be notified of successful samplings, i.e. sampling failures don't trigger pruning
dapplion (Collaborator, Author)

unnecessary comments
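
For context on the removed TODO, a minimal, self-contained sketch of the point it was making: fork-choice only needs to be notified of successful samplings, so the consumer reduces to a single call on the Ok path and failures need no extra handling. All names here (ForkChoice, on_sampling_completed, handle_sampling_result) are illustrative assumptions, not Lighthouse's actual API.

// Hypothetical sketch, not Lighthouse code: only the success path matters.
#[derive(Debug)]
enum SamplingError {
    NotEnoughColumns,
}

type Hash256 = [u8; 32];

struct ForkChoice {
    // Blocks that have passed data-availability sampling.
    sampled_blocks: Vec<Hash256>,
}

impl ForkChoice {
    fn on_sampling_completed(&mut self, block_root: Hash256) {
        self.sampled_blocks.push(block_root);
    }
}

fn handle_sampling_result(
    fork_choice: &mut ForkChoice,
    block_root: Hash256,
    result: Result<(), SamplingError>,
) {
    // Sampling failures don't trigger pruning, so only Ok is forwarded.
    if result.is_ok() {
        fork_choice.on_sampling_completed(block_root);
    }
}

fn main() {
    let mut fork_choice = ForkChoice { sampled_blocks: Vec::new() };
    handle_sampling_result(&mut fork_choice, [0u8; 32], Ok(()));
    assert_eq!(fork_choice.sampled_blocks.len(), 1);
}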

@@ -802,7 +801,6 @@ impl<T: BeaconChainTypes> SyncNetworkContext<T> {
self.custody_by_root_requests.insert(requester, request);
Ok(LookupRequestResult::RequestSent(req_id))
}
// TODO(das): handle this error properly
dapplion (Collaborator, Author)

This refers to the fact that custody request errors are "nested", but that's okay; so far they have been fine to debug.
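
A minimal sketch of what "nested" means here, assuming a custody error type that simply wraps the underlying RPC failure rather than mapping it to a dedicated top-level variant. The enums and names are illustrative, not Lighthouse's actual types.

// Hypothetical sketch, not Lighthouse code: a wrapped (nested) error is
// still perfectly debuggable via its Debug output.
#[derive(Debug)]
enum RpcError {
    Timeout,
}

#[derive(Debug)]
enum CustodyRequestError {
    // The inner RPC failure is carried along instead of being flattened.
    Rpc(RpcError),
    NoPeers,
}

fn request_custody_columns() -> Result<(), CustodyRequestError> {
    // A failing RPC is simply wrapped by the custody layer.
    Err(CustodyRequestError::Rpc(RpcError::Timeout))
}

fn main() {
    if let Err(e) = request_custody_columns() {
        // Prints: custody request failed: Rpc(Timeout)
        println!("custody request failed: {e:?}");
    }
}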

@@ -22,7 +22,6 @@ pub type SamplingResult = Result<(), SamplingError>;
type DataColumnSidecarList<E> = Vec<Arc<DataColumnSidecar<E>>>;

pub struct Sampling<T: BeaconChainTypes> {
// TODO(das): stalled sampling request are never cleaned up
dapplion (Collaborator, Author)

Requests can't go stale.
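
A minimal sketch of the lifecycle argument, assuming every sampling outcome, success or failure, removes the request from the tracking map so no entry can linger. The names and structure are illustrative, not Lighthouse's actual code.

// Hypothetical sketch, not Lighthouse code: requests are always dropped at a
// terminal state, so the map cannot accumulate stalled entries.
use std::collections::HashMap;

type Hash256 = [u8; 32];

struct ActiveSamplingRequest;

struct Sampling {
    requests: HashMap<Hash256, ActiveSamplingRequest>,
}

impl Sampling {
    // Called for every outcome; the entry is removed unconditionally.
    fn on_sampling_result(&mut self, block_root: Hash256, _result: Result<(), ()>) {
        self.requests.remove(&block_root);
    }
}

fn main() {
    let mut sampling = Sampling { requests: HashMap::new() };
    sampling.requests.insert([0u8; 32], ActiveSamplingRequest);
    sampling.on_sampling_result([0u8; 32], Err(()));
    assert!(sampling.requests.is_empty());
}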
