channeldb+discovery: ensure we store, validate and propagate announcements with opaque data #1825
Conversation
Does not compile as is.
chan_series.go (Outdated)
	Features:        n.Features.RawFeatureVector,
	RGBColor:        n.Color,
	Alias:           alias,
	ExtraOpauqeData: e1.ExtraOpauqeData,
Spelling is wrong: ExtraOpauqeData should be ExtraOpaqueData (here, and in all the places below).
Fixed! ;)
 	length += 64

-	return length
+	return 65533
Since we never check this data in any way, I think this means nodes can start sending us large messages (up to MaxPayloadLength) that we will happily store and forward. Should we instead set a soft limit on the max msg length we accept, with a reasonable buffer length for future added fields?
Yep, for these classes of messages that we'll store. We can add a limiter later if/when it proves to be an issue; to my knowledge there's nothing on mainnet atm that's excessively padded. The other day I started on a tool to traverse select buckets and compute the current disk utilization, something we should start monitoring given the recent chan update index bug and the fact that bolt by default won't automatically reclaim space.
It looks like we already have a soft cap in this PR https://github.com/lightningnetwork/lnd/pull/1825/files#diff-777fae0a5ca9241da46f94633cbb242bR2832
Is this intentional?
I assume you need two ExtraOpaqueData blobs in node_announcement - one for extra address types you don't understand, and one for extra data after the addresses. Because of the sorting requirement you could almost eke by without it, but your re-encoded length descriptor for the address data would be wrong.
@TheBlueMatt yep, this is just going after the generic case; a follow-up will be made for the case of node announcements themselves. The generic case is a higher priority than the node ann case, as atm there are no "official" unknown node anns in the wild.
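To illustrate the distinction (purely a sketch; the type and field names below are hypothetical, not lnd's actual lnwire types), a node_announcement parser that wanted to preserve both kinds of unknown data would have to track them separately, since the unrecognized address bytes live inside the addresses length descriptor while the generic opaque data trails the whole message:

package sketch

// nodeAnnExtras is an illustrative stand-in showing how the two kinds of
// unknown node_announcement bytes could be kept apart. Hypothetical names.
type nodeAnnExtras struct {
	// Bytes inside the addresses block whose address type we don't
	// recognize. These must be re-emitted verbatim so the re-encoded
	// addresses length descriptor still matches what was signed.
	unknownAddrBytes []byte

	// Bytes that appear after the addresses block, i.e. the generic
	// trailing opaque data this PR preserves.
	trailingData []byte
}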
Force-pushed from 7037ddf to 856766a.
Does not build as is.
Force-pushed from dd9fbf9 to 54c0965.
channeldb/graph_test.go (Outdated)
@@ -754,6 +763,9 @@ func TestEdgeInfoUpdates(t *testing.T) {
	if err := compareEdgePolicies(dbEdge2, edge2); err != nil {
		t.Fatalf("edge doesn't match: %v", err)
	}
	if err := compareEdgePolicies(dbEdge2, edge2); err != nil {
duplicate assertion?
channeldb/graph.go (Outdated)
@@ -2674,6 +2698,11 @@ func putLightningNode(nodeBucket *bolt.Bucket, aliasBucket *bolt.Bucket,
		return err
	}

	err = wire.WriteVarBytes(&b, 0, node.ExtraOpaqueData)
Should we not fail this write if len(node.ExtraOpaqueData) > 1000? Seems like we could write bytes that we will refuse to deserialize due to the constrained read using ReadVarBytes below. Similar question for the other 2 calls to WriteVarBytes in this commit.
Added!
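For reference, a minimal sketch of the kind of guard being discussed, assuming a hypothetical maxAllowedExtraOpaqueBytes constant and helper name rather than lnd's actual ones:

package sketch

import (
	"bytes"
	"fmt"

	"github.com/btcsuite/btcd/wire"
)

// maxAllowedExtraOpaqueBytes is an illustrative cap standing in for whatever
// limit the database layer enforces.
const maxAllowedExtraOpaqueBytes = 10000

// writeExtraOpaqueData refuses to persist a blob that a later, constrained
// ReadVarBytes call would refuse to deserialize. Sketch only.
func writeExtraOpaqueData(b *bytes.Buffer, extra []byte) error {
	if len(extra) > maxAllowedExtraOpaqueBytes {
		return fmt.Errorf("extra opaque data is %d bytes, limit is %d",
			len(extra), maxAllowedExtraOpaqueBytes)
	}
	return wire.WriteVarBytes(b, 0, extra)
}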
channeldb/graph.go (Outdated)
@@ -3023,7 +3023,7 @@ func putChanEdgePolicy(edges *bolt.Bucket, edge *ChannelEdgePolicy, from, to []b
 		return err
 	}

-	if _, err := b.Write(to); err != nil {
+	if _, err := b.Write(to[:33]); err != nil {
This fix is made in #1821, perhaps we should get that in first? Seems like we should correct all residual off-by-one bugs related to update length before tacking more data onto the end of edge policies. Spidey senses are going off.
Agreed that #1821 should land before this. I'll remove this commit, which will cause the build of this PR to fail on the next run. I'll kill the build early though, so we don't burn any Travis slots in the process.
Force-pushed from 54c0965 to 95ee1a5.
Added a 10k byte limit, and also removed a commit which is fixed in another PR. As a result, until that PR is merged, this build will fail.
In this commit, we add a new field to all the existing gossip messages: ExtraOpaqueData. We do this because, before this commit, if we came across a ChannelUpdate message with a set of optional fields, we wouldn't be able to properly parse the signatures related to the message. If we never corrected this behavior, then we would violate the forwards-compatible principle we use when parsing existing messages. As these messages can now be padded out to the max message size, we've increased the MaxPayloadLength value for all of these messages. Fixes lightningnetwork#1814.
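A stdlib-only sketch of the decode-side idea described in this commit message, with a hypothetical type standing in for the real lnwire messages: after the known fields are parsed, whatever bytes remain are retained verbatim.

package sketch

import "io"

// gossipMsgSketch keeps only what matters here: the fields a node already
// understands, plus any trailing opaque bytes. Hypothetical type, not lnwire.
type gossipMsgSketch struct {
	knownFields []byte
	extraOpaque []byte
}

// decode parses the known portion of the message and then retains whatever
// is left, so the message can later be re-serialized byte for byte.
func (m *gossipMsgSketch) decode(r io.Reader, knownLen int) error {
	m.knownFields = make([]byte, knownLen)
	if _, err := io.ReadFull(r, m.knownFields); err != nil {
		return err
	}
	rest, err := io.ReadAll(r)
	if err != nil {
		return err
	}
	m.extraOpaque = rest
	return nil
}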
In this commit, we add a mirror set of fields to the ones we recently added to the set of gossip wire messages. With this set of fields in place, we ensure that we'll be able to properly store and re-validate gossip messages that contain a set of extra/optional fields.
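The reason the database needs these mirror fields is signature re-validation: the gossip signature covers the full serialized message, so any digest recomputed from disk has to include the trailing bytes. A simplified, stdlib-only sketch of that idea (the real scheme follows BOLT 7, not a bare SHA-256):

package sketch

import "crypto/sha256"

// signedDigest illustrates why the stored opaque tail matters: the gossip
// signature covers the known fields *and* whatever follows them, so any
// digest recomputed from disk must include both. Sketch only.
func signedDigest(knownFields, extraOpaque []byte) [32]byte {
	msg := append(append([]byte{}, knownFields...), extraOpaque...)
	return sha256.Sum256(msg)
}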
In this commit, we add a new limit on the largest number of extra opaque bytes that we'll allow to be written per vertex/edge. We do this in order to limit the amount of disk space that we expose, as it's possible that nodes may start to pad out their announcements, adding an externalized cost since other nodes would need to continue to store and relay these large announcements.
Force-pushed from 95ee1a5 to 48072f8.
In this commit, we account for the additional case wherein the announcement hasn't yet been written with the extra zero byte to indicate that there aren't any remaining bytes to be read. Before this commit, we accounted for the case where the announcement was written with the extra byte, but now we ensure that legacy nodes that upgrade will be able to boot properly.
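A sketch of the compatibility case this commit message describes: when reading a record written by a pre-upgrade node, the length prefix for the opaque data may simply be absent, so hitting EOF there should yield an empty blob rather than an error. The single-byte length below is a simplification of the real var-bytes encoding:

package sketch

import "io"

// readOptionalExtraBytes treats a clean EOF as "no extra data", which is
// exactly what a record serialized by a pre-upgrade node looks like.
func readOptionalExtraBytes(r io.Reader) ([]byte, error) {
	var lenByte [1]byte
	_, err := io.ReadFull(r, lenByte[:])
	switch {
	case err == io.EOF:
		// Legacy record: nothing was ever written after the known
		// fields, so there is no opaque data (and no length byte).
		return nil, nil
	case err != nil:
		return nil, err
	}

	extra := make([]byte, lenByte[0])
	if _, err := io.ReadFull(r, extra); err != nil {
		return nil, err
	}
	return extra, nil
}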
LGTM
// opaque bytes. We limit this value in order to ensure that we don't waste
// disk space due to nodes unnecessarily padding out their announcements with
// garbage data.
func ErrTooManyExtraOpaqueBytes(numBytes int) error {
👌
In this commit, we add a field to the ChannelUpdate denoting the maximum HTLC we support sending over this channel, a field which was recently added to the spec. This field serves multiple purposes. In the short term, it enables nodes to signal the largest HTLC they're willing to carry, allows light clients who don't verify channel existence to have some guidance when routing HTLCs, and finally may allow nodes to preserve a portion of bandwidth at all times. In the long term, this field can be used by implementations of AMP to guide payment splitting, as the largest possible HTLC one can route over a particular channel becomes apparent to a node. This PR was made possible by the merge of lightningnetwork#1825, which enables older nodes to properly retain and verify signatures on updates that include new fields (like this new max HTLC field) that they haven't yet been updated to recognize. In addition, the new ChannelUpdate fields are added to the lnwire fuzzing tests.
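As a concrete illustration of why #1825 matters for this follow-up: the max HTLC value rides at the end of channel_update, so a node that doesn't yet understand it still has to retain and re-emit those bytes for the signature to keep verifying. A stdlib-only sketch with a simplified layout (not the BOLT 7 wire format):

package sketch

import "encoding/binary"

// appendMaxHtlc tacks a hypothetical optional max-HTLC field onto an already
// serialized channel_update body. A node that doesn't understand the field
// simply carries these bytes as opaque data, which is what lets the
// signature keep verifying on older software.
func appendMaxHtlc(updateBody []byte, maxHtlcMsat uint64) []byte {
	var field [8]byte
	binary.BigEndian.PutUint64(field[:], maxHtlcMsat)
	return append(updateBody, field[:]...)
}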
Initially in lnd, we didn't store the extra TLV data that could be dangling off of gossip messages. This was fixed initially in lnd v0.5 with this PR: lightningnetwork#1825. Within that PR, we incorrectly set the `ExtraOpaqueData` (extra TLV blob) of the `ChannelAnnouncement` to the value stored in `edge`, which is actually our channel update. As 6-ish years ago we didn't yet have anything that used the TLV gossip fields, this went unnoticed.

Fast forward to 2024: we shipped an experimental version of inbound fees, which starts to store additional data in the `ExtraOpaqueData` field, namely the TLV for the inbound fee. Initially, everything is valid when the first `ChannelAnnouncement` is sent, but as soon as a user attempts to set an inbound fee policy, we'd incorrectly swap in that new serialized TLV for the _channel announcement_: lightningnetwork@841e243#diff-1eda595bbebe495bd74a6a0431c46b66cb4e8b53beb311067c010feac2665dcbR2560. Since we're just trying to generate a new `channel_update`, we don't also regenerate the signature for the `channel_announcement` message. As a result, we end up storing a `channel_announcement` with an invalid sig on disk, and continue to broadcast that to peers.
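Expressed as code, the bug described above was roughly an assignment mix-up when rebuilding the announcement; a hedged sketch with simplified stand-in types, not lnd's actual structs:

package sketch

// Simplified stand-ins for the stored announcement and edge policy records.
type chanAnnSketch struct{ extraOpaqueData []byte }
type edgePolicySketch struct{ extraOpaqueData []byte }

// rebuildAnnouncement shows the correct assignment: the announcement keeps
// its *own* stored TLV blob. The buggy version used edge.extraOpaqueData
// here, so once inbound-fee TLVs landed in the update, the announcement's
// signature no longer matched its serialized bytes.
func rebuildAnnouncement(annInfo chanAnnSketch, edge edgePolicySketch) chanAnnSketch {
	_ = edge // the update's TLV belongs in channel_update, not here
	return chanAnnSketch{
		extraOpaqueData: annInfo.extraOpaqueData,
	}
}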
In this commit, we fix an existing bug within the codebase. For all gossip messages we stored, validated and propagated, we wouldn't also include any extra data up to the full length of the message. As a result, if in the future we upgrade in a forwards-compatible manner by adding new fields (possibly unknown to ourselves) to the end of a message, then we would be unable to properly validate and propagate the valid versions.
In order to fix this, both in the database and in our wire parsing, if after we parse the regular message there are any additional bytes, we'll also read those and store them in the database. We ensure that in all cases where we read these announcements from disk, we properly include the extra bytes in the wire messages before we write them out onto the socket.
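To close the loop, the write side mirrors the read side: whatever opaque tail was captured on decode is appended again before the message goes back out on the socket. A stdlib-only sketch, not lnd's actual lnwire encoder:

package sketch

import "io"

// encodeWithExtras re-serializes a gossip message so that it is byte for
// byte identical to what was received: the known fields first, then the
// opaque tail. Omitting the tail would leave peers with a signature that no
// longer matches the bytes on the wire.
func encodeWithExtras(w io.Writer, knownFields, extraOpaque []byte) error {
	if _, err := w.Write(knownFields); err != nil {
		return err
	}
	_, err := w.Write(extraOpaque)
	return err
}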