
websocket: improve frame parsing #3447

Open
wants to merge 3 commits into base: main
Conversation

@tsctx (Member) commented Aug 12, 2024

Adjust to avoid creating a temporary buffer

@tsctx tsctx requested a review from KhafraDev August 12, 2024 14:10
@KhafraDev (Member)
I'd prefer if there was a benchmark to measure the change.

@tsctx (Member, Author) commented Aug 12, 2024

As an example, the 256 KiB receive benchmark is about 1.2x faster.

@KhafraDev (Member)

Benchmarking websockets is hard and there is no benchmark to verify.

@tsctx (Member, Author) commented Aug 13, 2024

> Benchmarking websockets is hard and there is no benchmark to verify.

That was a simple measurement using console.time.
For an accurate benchmark, it should measure the parser in isolation.

@tsctx (Member, Author) commented Aug 13, 2024

This is a simple benchmark of the current Parser.

  • main
benchmark                  time (avg)             (min … max)       p75       p99      p999
------------------------------------------------------------- -----------------------------
parsing                   239 µs/iter  (29'500 ns … 2'398 µs)    242 µs  1'189 µs  2'202 µs
parsing (arraybuffer)     243 µs/iter  (25'400 ns … 9'173 µs)    227 µs  1'381 µs  5'795 µs
parsing (string)          459 µs/iter     (219 µs … 3'083 µs)    418 µs  1'990 µs  2'757 µs
  • this PR
benchmark                  time (avg)             (min … max)       p75       p99      p999
------------------------------------------------------------- -----------------------------
parsing                   906 ns/iter     (727 ns … 4'198 ns)    916 ns  3'199 ns  4'198 ns
parsing (arraybuffer)     211 µs/iter  (20'800 ns … 3'202 µs)    190 µs  1'239 µs  2'894 µs
parsing (string)          264 µs/iter     (166 µs … 4'089 µs)    231 µs  1'650 µs  3'126 µs
Script
import { bench, run } from "mitata";
import { opcodes } from "../../lib/web/websocket/constants.js";
import { toArrayBuffer, utf8Decode } from "../../lib/web/websocket/util.js";
import { ByteParser } from "../../lib/web/websocket/receiver.js";

function createFrame(opcode, data) {
  const length = data.length;

  // 7-bit length field, or the 126/127 marker for extended lengths.
  let payloadLength = length;
  let offset = 2; // header size: 2 base bytes plus extended-length bytes

  if (length > 65535) {
    offset += 8;
    payloadLength = 127; // 64-bit extended length
  } else if (length > 125) {
    offset += 2;
    payloadLength = 126; // 16-bit extended length
  }

  const frame = Buffer.allocUnsafeSlow(length + offset);

  frame[0] = 0x80 | opcode; // FIN bit set, RSV bits clear

  frame[1] = payloadLength; // unmasked (server-to-client), so no MASK bit

  if (payloadLength === 126) {
    frame.writeUInt16BE(length, 2);
  } else if (payloadLength === 127) {
    // 64-bit length field: high 2 bytes are zero, low 6 bytes hold the length.
    frame[2] = frame[3] = 0;
    frame.writeUIntBE(length, 4, 6);
  }

  if (length !== 0) {
    frame.set(data, offset);
  }

  return frame;
}

const requestBody = createFrame(opcodes.BINARY, Buffer.from('a'.repeat(256 * 1024), 'utf8'));

let _resolve, _reject;

const parser = new ByteParser({
  onFail: (reason) => {
    _reject(reason);
  },
  onMessage: (opcode, data) => {
    _resolve(data);
  },
});

bench("parsing", () => {
  return new Promise((resolve, reject) => {
    _resolve = resolve;
    _reject = reject;
    parser.write(requestBody);
  });
});

bench("parsing (arraybuffer)", () => {
  return new Promise((resolve, reject) => {
    _resolve = (data) => resolve(toArrayBuffer(data));
    _reject = reject;
    parser.write(requestBody);
  });
});

bench("parsing (string)", () => {
  return new Promise((resolve, reject) => {
    _resolve = (data) => resolve(utf8Decode(data));
    _reject = reject;
    parser.write(requestBody);
  });
});

await run();

@KhafraDev (Member)

I'd prefer a benchmark that uses WebSocket, not websocket internals. If #3203 could be completed, and then this PR benchmarked against that, I wouldn't have any complaints.
