
resolves #1073: integrate socket collection #1083

Merged

Conversation

povilasb
Contributor

@povilasb povilasb commented Dec 6, 2018

No description provided.

@povilasb povilasb force-pushed the integrate-socket-collection branch 2 times, most recently from c305ad6 to e9f91c8 Compare December 12, 2018 10:06
We replaced common::Socket with TcpSock from socket-collection, and some tests
started failing. It turned out that common::Socket did a little trick after every
write(): it always reregistered the socket for Readable events. So we retain the
same behavior with TcpSock by manually reregistering socket readiness where needed.
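For context, a minimal sketch of that manual reregistration, assuming mio 0.6's Poll/Ready/PollOpt (the names visible in the diff below) and any Evented handle such as TcpSock; `rearm_for_read` is a hypothetical helper, not the crust code itself:

```rust
use mio::event::Evented;
use mio::{Poll, PollOpt, Ready, Token};
use std::io;

// What common::Socket used to do implicitly after every write(): put the
// socket back into Readable interest so the next incoming message wakes us.
fn rearm_for_read<S: Evented>(poll: &Poll, sock: &S, token: Token) -> io::Result<()> {
    poll.reregister(sock, token, Ready::readable(), PollOpt::edge())
}
```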
The TCP listener was never used in the case of direct connections. Hence, remove it.
socket_collection::UdpSock buffers the packets itself, so there is no need to do that in the app layer.
We just make sure to call UdpSock::write_to(None) when the socket becomes writable.
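A self-contained sketch of that flush-on-writable idiom; `BufferedUdpSock` and `on_writable` are hypothetical stand-ins here, since the real UdpSock::write_to is generic over the serialisable message type:

```rust
use std::io;
use std::net::SocketAddr;

// Stand-in for socket-collection's UdpSock: per the commit message it buffers
// outgoing packets internally, and write_to(None) only drains that buffer.
trait BufferedUdpSock {
    fn write_to(&mut self, msg: Option<(Vec<u8>, SocketAddr)>) -> io::Result<bool>;
}

// Called when the poll reports the socket as writable: enqueue nothing new,
// just ask the sock to flush whatever it already has queued.
fn on_writable<S: BufferedUdpSock>(sock: &mut S) -> io::Result<bool> {
    sock.write_to(None)
}
```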
@@ -103,7 +104,8 @@ impl<UID: Uid> ConnectionCandidate<UID> {
     fn done(&mut self, core: &mut Core, poll: &Poll) {
         let _ = core.remove_state(self.token);
         let token = self.token;
-        let socket = mem::replace(&mut self.socket, Socket::default());
+        let socket = mem::replace(&mut self.socket, Default::default());
         let _ = poll.reregister(&socket, token, Ready::readable(), PollOpt::edge());
Contributor

Hmm, actually I want to get rid of this idiom of having to re-register to express exactly what we need. It's a laborious way to overly fine-tune things and is prone to errors. The uniform idiom should be to just read in a loop. If you collect many msgs and only one is meant to be handled by the current state, pass the remainder over to the next state. Similarly, never re-register to write (or avoid it unless impossible to do without it); just write instead. Start by registering once for read and write with edge notification and work your way from there.

This will, however, need a well-thought-out change, so it's OK to do it in a future PR. I'll put it as an issue.
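Something like the following, as a rough self-contained sketch of the proposed idiom; `register_once`, `MsgSock` and `drain` are hypothetical stand-ins, with `MsgSock::read` mirroring the Ok(Some(msg))/Ok(None) contract visible in the snippet below:

```rust
use mio::net::TcpStream;
use mio::{Poll, PollOpt, Ready, Token};
use std::io;

// Register a single time, for both read and write, with edge notification.
fn register_once(poll: &Poll, sock: &TcpStream, token: Token) -> io::Result<()> {
    poll.register(
        sock,
        token,
        Ready::readable() | Ready::writable(),
        PollOpt::edge(),
    )
}

// Stand-in for a message-oriented socket read: Ok(Some(msg)) while messages
// are available, Ok(None) once the socket would block.
trait MsgSock {
    type Msg;
    fn read(&mut self) -> io::Result<Option<Self::Msg>>;
}

// With edge notification we must read until Ok(None), otherwise we may never
// be woken for this socket again. Messages not meant for the current state
// would be handed to the next state rather than re-registered for.
fn drain<S: MsgSock>(sock: &mut S, mut handle: impl FnMut(S::Msg)) -> io::Result<()> {
    loop {
        match sock.read()? {
            Some(msg) => handle(msg),
            None => return Ok(()),
        }
    }
}
```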

Contributor

Issue created here for the future.

            }
            Ok(None) => return,
            Err(e) => {
Contributor

Hmm, this is not correct anymore after your changes above. A deserialisation error caused by one peer will now collapse the entire ServDisc server and affect all the other peers. So filter that error out separately and continue the loop in that case to cater to the others.
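A sketch of the suggested shape; `serve` and `ReadErr` are hypothetical stand-ins for however the real error type separates a per-peer deserialisation failure from a fatal I/O one:

```rust
use std::io;

// Hypothetical split of the read error: one variant per-peer, one fatal.
enum ReadErr {
    Deserialisation,
    Fatal(io::Error),
}

fn serve<M>(
    mut read: impl FnMut() -> Result<Option<M>, ReadErr>,
    mut handle: impl FnMut(M),
) -> io::Result<()> {
    loop {
        match read() {
            Ok(Some(msg)) => handle(msg),
            // Would-block: nothing more to do until the next readiness event.
            Ok(None) => return Ok(()),
            // One peer's malformed packet must not collapse the whole server:
            Err(ReadErr::Deserialisation) => continue,
            Err(ReadErr::Fatal(e)) => return Err(e),
        }
    }
}
```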

Contributor Author

yeah, you're right. I'll fix this.

         match msg {
             DiscoveryMsg::Request { guid } => {
                 if self.listen && self.guid != guid {
-                    self.reply_to.push_back(peer_addr);
-                    self.write(core, poll)
+                    self.respond_with_their_addr(peer_addr);
Contributor

Why not just use self.write(), which takes an Option, and pass it Some()? That makes it uniform with the code elsewhere in all our libs. Also, a single place per state to read from and write to a socket is better, as we then only need to look there for anything. So bring the functionality of respond_with..() over here inline and remove that newly created function.
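A toy sketch of that suggestion; all the types here are hypothetical placeholders for the real service-discovery state:

```rust
use std::net::SocketAddr;

// Placeholders for the real message enum and state struct.
struct DiscoveryMsg;

struct ServiceDiscovery;

impl ServiceDiscovery {
    // The single write path per state: Some(..) enqueues a new message,
    // None would only flush what is already queued.
    fn write(&mut self, msg: Option<(DiscoveryMsg, SocketAddr)>) {
        let _ = msg; // enqueue / flush via the UdpSock here
    }

    fn on_request(&mut self, peer_addr: SocketAddr) {
        // Inlined reply instead of a dedicated respond_with_their_addr():
        self.write(Some((DiscoveryMsg, peer_addr)));
    }
}
```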

…invalid

Also includes some refactoring related to message sending.
@ustulation ustulation merged commit bafde11 into maidsafe-archive:master Dec 13, 2018
@povilasb povilasb deleted the integrate-socket-collection branch December 13, 2018 10:57