Improve handling of Gaussian distributions #471
base: main
Conversation
Need to discuss how to handle empirically constructed Gaussians. Currently, we have explicit specialised classes to handle them, e.g. […] Lastly, regarding compatibility with […]
Codecov Report: all modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main     #471       +/-   ##
===========================================
- Coverage   90.53%   80.40%   -10.14%
===========================================
  Files          96      104        +8
  Lines        5983     7057     +1074
===========================================
+ Hits         5417     5674      +257
- Misses        566     1383      +817
===========================================
```
We now have:

```python
import torch
# Empirical, Diagonal and Ensemble are the distribution classes added in this PR

# Anisotropic empirical distribution from k samples at n sampling locations, each with d dimensions
k, n, d = 1000, 50, 3
samples = torch.rand(k, n, d)
dist0 = Empirical(samples)

# Isotropic empirical distribution (set off-diagonal elements to zero)
samples = torch.rand(k, n, d)
dist1 = Diagonal.from_dense(Empirical(samples))

# Just to demonstrate the .to_dense() method
dist1 = Diagonal.from_dense(dist1.to_dense())

# We can combine them into an ensemble
dist2 = Ensemble([dist0, dist1])
print(dist2.logdet(), dist2.trace(), dist2.max_eig())
```

```
tensor(-375.8140) tensor(12.4797) tensor(0.1201)
```
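For intuition, the `from_dense` conversion used above amounts to keeping only the diagonal of a dense covariance matrix and discarding cross-covariances. A minimal standalone sketch (not the PR's actual implementation; `diagonal_from_dense` is a hypothetical helper for illustration):

```python
import torch

def diagonal_from_dense(cov: torch.Tensor) -> torch.Tensor:
    """Keep only per-dimension variances, zeroing all cross-covariances.
    Illustrative stand-in for a dense-to-diagonal covariance conversion."""
    return torch.diag_embed(cov.diagonal(dim1=-2, dim2=-1))

cov = torch.tensor([[2.0, 0.5],
                    [0.5, 1.0]])
print(diagonal_from_dense(cov))
# off-diagonal 0.5 entries are dropped; variances 2.0 and 1.0 are kept
```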
This is looking great! I just want to confirm we all agree on where this code is meant to sit and how it is meant to be used. My understanding is that it will live within the active learning module, and these classes are used to reshape the covariance matrix of the torch distribution returned by AutoEmulate (…)
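As a rough sketch of what "reshaping the covariance matrix of the torch distribution" could look like in an active learner, here is a hedged example. The distribution construction is illustrative only, not AutoEmulate's API:

```python
import torch
from torch.distributions import MultivariateNormal, Normal

# Pretend this is an emulator's predictive distribution at n sampling locations
n = 4
cov = torch.eye(n) * 0.5 + 0.1  # dense covariance with off-diagonal terms
dist = MultivariateNormal(loc=torch.zeros(n), covariance_matrix=cov)

# Reduce the dense covariance to per-location variances, discarding
# cross-covariances between locations (a diagonal approximation)
variances = dist.covariance_matrix.diagonal()
diag_dist = Normal(dist.loc, variances.sqrt())
print(variances)
```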
It would also help me if we had some examples to illustrate when/how the different covariance structures emerge.
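To that point, a hedged sketch (not AutoEmulate code) of how the different covariance structures typically arise for predictions at n locations:

```python
import torch

n = 5
x = torch.linspace(0.0, 1.0, n)

# 1. Isotropic: a single shared variance, e.g. homoscedastic observation
#    noise -> covariance = sigma^2 * I
sigma2 = 0.1
cov_iso = sigma2 * torch.eye(n)

# 2. Diagonal: independent but location-dependent variances, e.g.
#    heteroscedastic predictive uncertainty
var = 0.05 + 0.1 * x  # variance grows with x (illustrative choice)
cov_diag = torch.diag(var)

# 3. Dense: correlated uncertainty, e.g. a GP posterior where nearby
#    locations share information through the kernel (RBF shown here)
cov_dense = torch.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)

print(cov_iso.count_nonzero(), cov_diag.count_nonzero(), cov_dense.count_nonzero())
```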
This pull request is intended to improve the handling of Gaussians in AutoEmulate and standardize the outputs active learners can expect from emulators.
Implemented covariance structures: