Broadcasted gaussian integral #494
base: develop
Conversation
idx1: Sequence[int],
idx2: Sequence[int],
measure: float = -1,
batched: bool = False,
We could avoid needing a batched argument if we enforced Abc1, Abc2 to be batched by default.
True, but then one needs to jump through hoops to use this function. Perhaps I can simply detect whether the triple is batched, batch it if needed, and finally return it as it was given.
Hmm, what hoops exactly? If this is a method we only use internally, then things should be batched already (assuming we're consistent with that; if we're not, we should fix it). If this is something we expose to users, then yeah, I'm okay with a batched argument.
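The "detect and batch if needed" idea from the discussion above could be sketched as follows. This is a hypothetical helper, not the actual mrmustard implementation; the name ensure_batched and the use of plain NumPy are illustrative assumptions.

```python
# Hedged sketch: accept both batched and unbatched (A, b, c) triples by
# adding a leading batch axis when one is missing, and report whether the
# caller should squeeze the result back. Names are illustrative.
import numpy as np

def ensure_batched(A, b, c):
    """Return a batched (A, b, c) triple plus an 'was unbatched' flag."""
    was_unbatched = A.ndim == 2  # an unbatched A is a plain matrix
    if was_unbatched:
        A = A[None, ...]         # (n, n) -> (1, n, n)
        b = b[None, ...]         # (n,)   -> (1, n)
        c = np.atleast_1d(c)     # scalar -> (1,)
    return A, b, c, was_unbatched

A = np.eye(2, dtype=np.complex128)
b = np.zeros(2, dtype=np.complex128)
A2, b2, c2, squeeze = ensure_batched(A, b, 1.0 + 0j)
```

A caller could then compute with the batched triple unconditionally and drop the leading axis at the end when the flag is set.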
X = math.block([[Z, I], [I, Z]])
M = math.gather(math.gather(A, idx, axis=-1), idx, axis=-2) + X * measure
bM = math.gather(b, idx, axis=-1)
cpart1 = math.sqrt(math.cast((-1) ** m / math.det(M), "complex128"))
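The nested gathers in the diff above extract the submatrix of A indexed by idx along the last two axes, broadcasting over any leading batch dimensions. A minimal NumPy sketch of the unbatched case, using np.take as a stand-in for mrmustard's math.gather:

```python
# Illustrative sketch: nested takes along the last two axes pick out
# rows and columns idx of A, like the math.gather calls in the diff.
import numpy as np

A = np.arange(16.0).reshape(4, 4)
idx = [0, 2]
M = np.take(np.take(A, idx, axis=-1), idx, axis=-2)

# For the unbatched case this is equivalent to fancy indexing:
assert np.allclose(M, A[np.ix_(idx, idx)])
```

Because the axes are specified from the end (axis=-1, axis=-2), the same two calls work unchanged when A carries a leading batch dimension.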
I remember having to introduce some logic for handling the case when math.det(M) is zero (see complex_gaussian_integral). E.g. this was an issue when calling (I believe) .normalize on QuadratureEigenstate. Should we have that here?
I'm not sure. What were you doing in those situations?
E.g. see line 136 in complex_gaussian_integral: I check if det is 0 and, if it is, handle it by using np.inf.
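The zero-determinant guard described above could look roughly like this. This is a hedged sketch mirroring the approach mentioned for complex_gaussian_integral, not the actual mrmustard code; safe_prefactor is a hypothetical name and plain NumPy stands in for the math backend.

```python
# Hedged sketch: guard the prefactor sqrt((-1)**m / det(M)) against a
# singular M by returning np.inf, as described in the comment above.
import numpy as np

def safe_prefactor(M, m):
    det = np.linalg.det(M)
    if np.isclose(det, 0):
        # Degenerate case: the quadratic form is singular, so the
        # prefactor diverges; signal it with np.inf.
        return np.inf
    # Cast to complex so the square root of a negative ratio is defined.
    return np.sqrt(np.asarray((-1) ** m / det, dtype=np.complex128))

val = safe_prefactor(2 * np.eye(2), m=2)  # det = 4, so sqrt(1/4) = 0.5
```

In a batched setting the check would need to run per batch entry, e.g. via np.where on the vector of determinants rather than a scalar branch.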
User description
Context:
When circuit components have a large batch dimension, it's not efficient to loop over it when computing gaussian integrals.
Description of the Change:
Gaussian integrals can now be computed using broadcasting rules.
Benefits:
Faster (about 10x)
Possible Drawbacks:
Only the version where two Abc triples are passed is implemented, not the one where a single Abc triple is contracted with itself.
Related GitHub Issues:
None
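The "faster (about 10x)" claim comes from replacing a Python loop over the batch dimension with broadcast operations. A toy illustration of the pattern (shapes and numbers are made up, not from the PR): NumPy's linear-algebra routines operate on the last two axes of a stacked array, so a loop of per-matrix calls collapses into one vectorized call.

```python
# Illustrative sketch: looping over a batch vs. letting np.linalg.det
# broadcast over the leading batch axis. Both give the same result;
# the vectorized form avoids Python-level loop overhead.
import numpy as np

rng = np.random.default_rng(0)
batch, n = 100, 4
A = rng.standard_normal((batch, n, n))

# Looped: one determinant at a time.
dets_loop = np.array([np.linalg.det(A[k]) for k in range(batch)])

# Broadcast: det operates on the last two axes of the stack.
dets_vec = np.linalg.det(A)

assert np.allclose(dets_loop, dets_vec)
```

The same pattern applies to the matrix products and inverses inside the Gaussian integral, which is where the batched implementation gains its speedup.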
PR Type
Enhancement, Tests
Description
Added a complex_gaussian_integral_2 function in gaussian_integrals.py that supports batched inputs and uses broadcasting rules for efficient computation.
Added tests in test_gaussian_integrals.py to validate the functionality of the new batched Gaussian integral computation, including tests for both batched and non-batched inputs and polynomial c parameter handling.
Changes walkthrough 📝
gaussian_integrals.py
Add batched Gaussian integral computation with broadcasting
mrmustard/physics/gaussian_integrals.py
Add a complex_gaussian_integral_2 function with support for batched inputs and broadcasting rules.
test_gaussian_integrals.py
Add tests for batched Gaussian integral computation
tests/test_physics/test_gaussian_integrals.py
Add tests for the complex_gaussian_integral_2 function, including polynomial c parameter handling.