Scatter parameters once in TPI #925

Merged · 5 commits · Apr 19, 2024

Conversation

talumbau
Member

  • Similar to the change in SS, put the parameters in global scope, then retrieve them in inner_loop if possible. If they are not present, scatter them once for all future executions of inner_loop.
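For readers skimming the diff, here is a minimal sketch of the scatter-once pattern described above; the names (`scattered_p`, the `inner_loop` signature) are illustrative assumptions rather than the actual TPI.py code.

```python
# Module-level cache: the scattered parameters object persists across calls.
scattered_p = None


def inner_loop(outer_loop_vars, p, client=None):
    """Toy stand-in for TPI.inner_loop that shows only the caching logic."""
    global scattered_p
    if client is not None:
        if scattered_p is None:
            # First call: ship the large parameters object to the workers once
            # and keep the resulting Future for all later outer-loop iterations.
            scattered_p = client.scatter(p, broadcast=True)
        p_arg = scattered_p  # worker tasks resolve this Future to p locally
    else:
        p_arg = p  # serial fallback when no Dask client is in use
    # ... submit the household/firm tasks with p_arg in place of p ...
    return p_arg


# Usage sketch:
#   client = distributed.Client(n_workers=num_workers)
#   inner_loop(outer_loop_vars, p, client=client)  # scatters p once
#   inner_loop(outer_loop_vars, p, client=client)  # reuses the cached Future
```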

@rickecon
Member

@talumbau. I ran all the GH CI tests plus the ones marked as local, and I got a ton of the same error that shows up in the GH CI test_TPI.py test_inner_loop() test: NameError: name 'client' is not defined. The change works in the code itself, but it breaks in the test suite. I tried adding dask_client as an input to the test_inner_loop function, but I probably also need to declare client = dask_client inside the function.
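For reference, a hypothetical sketch of that test-side fix, assuming the suite exposes a dask_client pytest fixture; the fixture name and the commented-out inner_loop call are assumptions, not the actual OG-Core test code.

```python
def test_inner_loop(dask_client):
    # Bind the fixture to the name the rest of the test expects, rather than
    # relying on a global `client` that is never defined.
    client = dask_client
    # ... build the Specifications object and the other inner_loop inputs ...
    # TPI.inner_loop(..., client=client)
    assert client is not None
```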

In any event, below is my local run of the full test suite. You can ignore the three failures in test_txfunc.py; the rest of the test failures are all NameError: name 'client' is not defined.

=========================== short test summary info ===========================
FAILED tests/test_TPI.py::test_inner_loop - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline, balanced budget] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Reform] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Reform, baseline spending] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline, small open] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline, small open some periods] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline, delta_tau = 0] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline, Kg > 0] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline, M=3 non-zero Kg] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline, M=3 zero Kg] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline, M!=I] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI[Baseline] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI[Reform] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, balanced budget] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, small open] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, small open for some periods] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, delta_tau = 0] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_extra[Reform, baseline spending] - NameError: name 'client' is not defined
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, Kg>0] - NameError: name 'client' is not defined
FAILED tests/test_basic.py::test_run_small[TPI] - NameError: name 'client' is not defined
FAILED tests/test_basic.py::test_constant_demographics_TPI - NameError: name 'client' is not defined
FAILED tests/test_basic.py::test_constant_demographics_TPI_small_open - NameError: name 'client' is not defined
FAILED tests/test_execute.py::test_runner_baseline_reform - NameError: name 'client' is not defined
FAILED tests/test_run_example.py::test_run_ogcore_example - assert False
FAILED tests/test_run_ogcore.py::test_run_micro_macro - NameError: name 'client' is not defined
FAILED tests/test_txfunc.py::test_txfunc_est[DEP] - assert False
FAILED tests/test_txfunc.py::test_tax_func_loop - assert False
FAILED tests/test_txfunc.py::test_tax_func_estimate - assert False
========= 30 failed, 507 passed, 16786 warnings in 2495.55s (0:41:35) =========

@rickecon
Member

The great news is that there are no concurrent.futures errors! @talumbau @jdebacker

@talumbau
Member Author

Oops. Too much copy/paste. The solution has to be a bit different here because we don't actually scatter inside the inner_loop, as was done in SS. I'll fix it up and run tests locally before pushing again.

 - Similar to the change in SS, put the parameters in global
   scope, then in `inner_loop` retrieve if possible. If they are not
   present, scatter them once for all future execution of
   `inner_loop`.
@rickecon
Member

@talumbau. Just a warning that running the full battery of tests with pytest on your local machine can take up to 7 hours. The test_TPI.py tests take the majority of the time, although I think test_run_ogcore.py and test_run_example.py also take a while.

@talumbau
Member Author

OK, I'm just finishing running the example here. The good news is that this change appears to get rid of all of the Dask warnings about excessive garbage collection. However, I noticed that TPI is taking a long time for me, and it looks like one process is doing a lot of work while many of the others are doing almost nothing. See this screenshot of top during the run:

[Screenshot from 2024-04-16 09:00:17: top output during the run]

So it looks like one of the "categories" is doing more computational work than the others.

@talumbau
Member Author

I fixed the issue and pushed the changes to my branch. Running test_TPI.py right now. Should be done by tomorrow.

@codecov-commenter

codecov-commenter commented Apr 17, 2024

Codecov Report

Attention: Patch coverage is 50.00000% with 3 lines in your changes missing coverage. Please review.

Project coverage is 73.44%. Comparing base (66d6223) to head (9e0e312).

Additional details and impacted files

@@            Coverage Diff             @@
##           master     #925      +/-   ##
==========================================
+ Coverage   73.43%   73.44%   +0.01%     
==========================================
  Files          19       19              
  Lines        4641     4643       +2     
==========================================
+ Hits         3408     3410       +2     
  Misses       1233     1233              
| Flag | Coverage Δ |
| --- | --- |
| unittests | 73.44% <50.00%> (+0.01%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Files | Coverage Δ |
| --- | --- |
| `ogcore/__init__.py` | 100.00% <100.00%> (ø) |
| `ogcore/TPI.py` | 35.73% <40.00%> (+0.33%) ⬆️ |

@talumbau
Member Author

OK, I confirm that the run_og_usa.py example works with this PR and that test_TPI.py runs with all tests passing using the changes above:

==================================================== 27 passed, 1392 warnings in 62705.23s (17:25:05) ====================================================

So now the most helpful thing would be for someone else to do an A/B comparison for some of these tests (or maybe just run_og_usa.py) and tell me the impact on runtimes. I think I am consistently getting runtimes that are longer than what @rickecon and @jdebacker report. For example, here is the output from the OG-USA example:

Checking time path for violations of constraints.
Max Euler error, savings:  1.3273826482418372e-12
Max Euler error labor supply:  1.715960706860642e-12
Time path iteration complete.
It took 43754.84175467491 seconds to get that part done.
run time =  43754.84184098244

So around 12 hours of runtime. But I don't believe my longer runtimes have anything to do with the Dask optimizations I'm making here. So, @rickecon and @jdebacker, can you do an A/B comparison of just the run_og_usa.py example with and without this PR and tell me the times you get? Also let me know whether the garbage collector warnings go away (they did for me).
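A quick check of the "around 12 hours" figure, using the seconds reported in the log above:

```python
tpi_seconds = 43754.84  # "It took 43754.84... seconds" from the log above
print(f"{tpi_seconds / 3600:.2f} hours")  # prints 12.15 hours
```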

@rickecon
Member

@talumbau I currently have all the OG-Core tests running on my machine. I started them last night, so they should finish in the next hour. Then I will run and time run_og_usa.py, making sure I update the ogusa-dev conda environment to this new OG-Core.

Will you please make the following additions to this PR?

  • Update the version to 0.11.6 in setup.py and in ogcore/__init__.py (see the sketch after this list)
  • Add the following section at the beginning of CHANGELOG.md
## [0.11.6] - 2024-04-17 14:00:00

### Added

- Scatters parameters once in `TPI.py`
  • Add the following line at the end of CHANGELOG.md
[0.11.6]: https://github.com/PSLmodels/OG-Core/compare/v0.11.5...v0.11.6
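For reference, a minimal sketch of where the 0.11.6 bump lands, assuming the standard layout in which ogcore/__init__.py exposes a __version__ string and setup.py passes a version= argument; the exact lines in the repository may differ.

```python
# ogcore/__init__.py
__version__ = "0.11.6"

# setup.py (inside the setup() call)
# setup(name="ogcore", version="0.11.6", ...)
```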

@rickecon
Member

@talumbau @jdebacker. The full set of tests looks great. The only tests failing locally are the three test_txfunc.py tests that we already know about and are working on fixing. Now I'll run run_og_usa.py with this new ogcore package and time my results.

(ogcore-dev) richardevans@Richards-MacBook-Pro-2 OG-Core % pytest
====================================================== test session starts ======================================================
platform darwin -- Python 3.11.8, pytest-8.1.1, pluggy-1.4.0
rootdir: /Users/richardevans/Docs/Economics/OSE/OG-Core
configfile: pytest.ini
testpaths: ./tests
plugins: cov-5.0.0, anyio-4.3.0, xdist-3.5.0
collected 537 items                                                                                                             

tests/test_SS.py ...................................                                                                      [  6%]
tests/test_TPI.py ...........................                                                                             [ 11%]
tests/test_aggregates.py .....................................                                                            [ 18%]
tests/test_basic.py ....                                                                                                  [ 19%]
tests/test_demographics.py ................                                                                               [ 22%]
tests/test_elliptical_u_est.py .......                                                                                    [ 23%]
tests/test_execute.py .                                                                                                   [ 23%]
tests/test_firm.py .....................................................................                                  [ 36%]
tests/test_fiscal.py ...................                                                                                  [ 40%]
tests/test_household.py ..............................................                                                    [ 48%]
tests/test_output_plots.py ...............................................                                                [ 57%]
tests/test_output_tables.py ..............                                                                                [ 59%]
tests/test_parameter_plots.py ........................................                                                    [ 67%]
tests/test_parameter_tables.py .......                                                                                    [ 68%]
tests/test_parameters.py ..............                                                                                   [ 71%]
tests/test_run_example.py ..                                                                                              [ 71%]
tests/test_run_ogcore.py .                                                                                                [ 71%]
tests/test_tax.py ......................................                                                                  [ 78%]
tests/test_txfunc.py .....F.......F..........F.                                                                           [ 83%]
tests/test_user_inputs.py .........                                                                                       [ 85%]
tests/test_utils.py ..............................................................................                        [100%]
==================================================== short test summary info ====================================================
FAILED tests/test_txfunc.py::test_txfunc_est[DEP] - assert False
FAILED tests/test_txfunc.py::test_tax_func_loop - assert False
FAILED tests/test_txfunc.py::test_tax_func_estimate - assert False
================================= 3 failed, 534 passed, 16826 warnings in 43202.49s (12:00:02) ==================================

@rickecon
Member

@talumbau. I just submitted a PR to your branch that has the version updates to setup.py, ogcore/__init__.py, and CHANGELOG.md, and that removes the Python 3.9 tests from .github/workflows/build_and_test.yml and pyproject.toml. I will merge this PR as soon as you merge in my PR and the GH Actions CI tests for this PR all pass.

@rickecon
Member

rickecon commented Apr 17, 2024

Rick's run of run_og_usa.py statistics with updated OG-Core.

In summary, the baseline TPI takes 1 hour and 34 minutes, and the reform TPI takes 1 hour and 21 minutes.

I created a branch off my updated OG-USA master branch and added an environment2.yml file without the pip install ogcore line, which I used to create a new ogusa-dev2 conda environment. I activated that environment, navigated to my updated OG-Core repository (updated with my PR to TJ's branch), and ran `pip install -e .`. Then I navigated to the updated OG-USA master branch on my machine and ran `pip install -e .`.

Baseline steady state
The baseline steady-state process had no distributed.utils_perf - WARNING - full garbage collections warnings.

SS debt =  1.3797697609894326 0.009002672987621484
IO:  (1, 1) , C:  (1,)
Steady state government spending is negative to satisfy budget
Checking constraints on capital, labor, and consumption.
	There were no violations of the constraints on labor  supply.
	There were no violations of the constraints on  consumption.

Baseline transition path (1 hour, 34 min, 22.8 sec)
The TPI process still had a ton of distributed.utils_perf - WARNING - full garbage collections warnings.

Maximum debt ratio:  2.0001622488980253
w diff:  3.6262606339931835e-07 -8.331511813786108e-08
r diff:  1.0371116021534732e-08 -4.8755949597079073e-08
r_p diff:  9.287470473240411e-09 -3.014206673146447e-08
p_m diff:  0.0 0.0
BQ diff:  5.787611687124716e-08 -4.224109490663652e-08
TR diff:  4.2260502805535616e-08 -3.197596561838045e-08
Iteration: 24
	Distance: 9.00580025994944e-06
Max absolute value resource constraint error: 2.888291650224306e-07
Checking time path for violations of constraints.
Max Euler error, savings:  3.26316751397826e-12
Max Euler error labor supply:  1.1193268534270828e-12
Time path iteration complete.
It took 5662.765335798264 seconds to get that part done.
run time =  5662.765414237976

Reform steady-state
The steady-state process had some distributed.utils_perf - WARNING - full garbage collections warnings.

SS debt =  1.3729583344323621 0.008958230032277593
IO:  (1, 1) , C:  (1,)
Steady state government spending is negative to satisfy budget
Checking constraints on capital, labor, and consumption.
	There were no violations of the constraints on labor  supply.
	There were no violations of the constraints on  consumption.

Reform transition path (1 hour, 21 min, 2.3 sec)
The TPI process still had a ton of distributed.utils_perf - WARNING - full garbage collections warnings.

Maximum debt ratio:  2.000158164759812
w diff:  3.7282303866348343e-07 -1.7090183446200058e-07
r diff:  1.672847851214021e-08 -4.945117810378763e-08
r_p diff:  1.2900811023619507e-08 -2.998762382855347e-08
p_m diff:  0.0 0.0
BQ diff:  5.6923739673309104e-08 -4.164328062938871e-08
TR diff:  4.6912617693295466e-08 -3.121796118832343e-08
Iteration: 24
	Distance: 9.596531615750301e-06
Max absolute value resource constraint error: 2.843441535491098e-07
Checking time path for violations of constraints.
Max Euler error, savings:  3.0148106233696126e-12
Max Euler error labor supply:  1.3500311979441904e-12
Time path iteration complete.
It took 4862.318516969681 seconds to get that part done.
run time =  4862.318587064743
Percentage changes in aggregates:
Year                    Variable  2023  2024  2025  2026  2027  2028  2029  2030  2031  2032  2023-2032    SS
0                    GDP ($Y_t$) -0.14 -0.13 -0.12 -0.11 -0.09 -0.07 -0.04 -0.01  0.02  0.06      -0.06 -0.49
1            Consumption ($C_t$) -0.09 -0.17 -0.22 -0.27 -0.31 -0.35 -0.38 -0.40 -0.42 -0.44      -0.30 -1.03
2          Capital Stock ($K_t$) -0.34 -0.39 -0.41 -0.42 -0.42 -0.40 -0.37 -0.33 -0.27 -0.21      -0.36 -1.84
3                  Labor ($L_t$) -0.01  0.02  0.06  0.09  0.11  0.14  0.16  0.18  0.20  0.22       0.12  0.34
4     Real interest rate ($r_t$) -3.19 -3.07 -2.99 -2.94 -2.90 -2.89 -2.89 -2.91 -2.95 -2.99      -2.97 -0.99
5                      Wage rate -0.13 -0.16 -0.18 -0.19 -0.20 -0.20 -0.20 -0.19 -0.18 -0.16      -0.18 -0.83

@rickecon
Member

Rick's run of run_og_usa.py statistics with the old (current) OG-Core version 0.11.5.

In summary, the baseline TPI takes 1 hour and 43 minutes, and the reform TPI takes 1 hour and 36 minutes.

I created a fresh ogusa-dev environment using the environment.yml file, which downloads the latest version of ogcore (version 0.11.5) from PyPI.org. This version does not include the PR from two days ago that updated the scattering of the parameters object in SS.py.

Baseline steady state
The baseline steady-state had a lot of distributed.utils_perf - WARNING - full garbage collections warnings.

SS debt =  1.3797697609894326 0.009002672987621484
IO:  (1, 1) , C:  (1,)
Steady state government spending is negative to satisfy budget
Checking constraints on capital, labor, and consumption.
	There were no violations of the constraints on labor  supply.
	There were no violations of the constraints on  consumption.

Baseline transition path (1 hour, 42 min, 59.1 sec)
The TPI process only had distributed.utils_perf - WARNING - full garbage collections warnings during the first two iterations.

Maximum debt ratio:  2.0001622488980253
w diff:  3.6262606339931835e-07 -8.331511813786108e-08
r diff:  1.0371116021534732e-08 -4.8755949597079073e-08
r_p diff:  9.287470473240411e-09 -3.014206673146447e-08
p_m diff:  0.0 0.0
BQ diff:  5.787611687124716e-08 -4.224109490663652e-08
TR diff:  4.2260502805535616e-08 -3.197596561838045e-08
Iteration: 24
	Distance: 9.00580025994944e-06
Max absolute value resource constraint error: 2.888291650224306e-07
Checking time path for violations of constraints.
Max Euler error, savings:  3.26316751397826e-12
Max Euler error labor supply:  1.1193268534270828e-12
Time path iteration complete.
It took 6179.063030004501 seconds to get that part done.
run time =  6179.0630939006805

Reform steady-state
The steady-state process had a lot of distributed.utils_perf - WARNING - full garbage collections warnings. It also took a long time.

SS debt =  1.3729583344323621 0.008958230032277593
IO:  (1, 1) , C:  (1,)
Steady state government spending is negative to satisfy budget
Checking constraints on capital, labor, and consumption.
	There were no violations of the constraints on labor  supply.
	There were no violations of the constraints on  consumption.
/opt/anaconda3/envs/ogusa-dev/lib/python3.11/site-packages/ogcore/SS.py:1416: UserWarning: Warning: The combination of the tax policy you specified and your target debt-to-GDP ratio results in an infeasible amount of government spending in order to close the budget (i.e., G < 0)

Reform transition path (1 hour, 35 min, 33.8 sec)
The TPI process still had distributed.utils_perf - WARNING - full garbage collections warnings.

Maximum debt ratio:  2.000158164759812
w diff:  3.7282303866348343e-07 -1.7090183446200058e-07
r diff:  1.672847851214021e-08 -4.945117810378763e-08
r_p diff:  1.2900811023619507e-08 -2.998762382855347e-08
p_m diff:  0.0 0.0
BQ diff:  5.6923739673309104e-08 -4.164328062938871e-08
TR diff:  4.6912617693295466e-08 -3.121796118832343e-08
Iteration: 24
	Distance: 9.596531615750301e-06
Max absolute value resource constraint error: 2.843441535491098e-07
Checking time path for violations of constraints.
Max Euler error, savings:  3.0148106233696126e-12
Max Euler error labor supply:  1.3500311979441904e-12
Time path iteration complete.
It took 5733.783985853195 seconds to get that part done.
run time =  5733.784060716629
Percentage changes in aggregates:
Year                    Variable  2023  2024  2025  2026  2027  2028  2029  2030  2031  2032  2023-2032    SS
0                    GDP ($Y_t$) -0.14 -0.13 -0.12 -0.11 -0.09 -0.07 -0.04 -0.01  0.02  0.06      -0.06 -0.49
1            Consumption ($C_t$) -0.09 -0.17 -0.22 -0.27 -0.31 -0.35 -0.38 -0.40 -0.42 -0.44      -0.30 -1.03
2          Capital Stock ($K_t$) -0.34 -0.39 -0.41 -0.42 -0.42 -0.40 -0.37 -0.33 -0.27 -0.21      -0.36 -1.84
3                  Labor ($L_t$) -0.01  0.02  0.06  0.09  0.11  0.14  0.16  0.18  0.20  0.22       0.12  0.34
4     Real interest rate ($r_t$) -3.19 -3.07 -2.99 -2.94 -2.90 -2.89 -2.89 -2.91 -2.95 -2.99      -2.97 -0.99
5                      Wage rate -0.13 -0.16 -0.18 -0.19 -0.20 -0.20 -0.20 -0.19 -0.18 -0.16      -0.18 -0.83

@rickecon
Member

rickecon commented Apr 18, 2024

@talumbau and @jdebacker. Here is a summary of my two sets of OG-USA runs on the old OG-Core (v.0.11.5) and on the new OG-Core (v.0.11.6).

|  | SS baseline Dask warnings | TPI baseline comp time | TPI baseline Dask warnings | SS reform Dask warnings | TPI reform comp time | TPI reform Dask warnings |
| --- | --- | --- | --- | --- | --- | --- |
| Old OG-Core (v.0.11.5) | many | 1:42:59.1 | few | many | 1:35:33.8 | many |
| New OG-Core (v.0.11.6) | none | 1:34:22.8 | many | few | 1:21:2.3 | many |
| Pct. change | decrease | -8.4% | increase | decrease | -15.2% | no change |
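The percentage changes in the table can be reproduced from the TPI run times (in seconds) reported in the two comments above:

```python
# (old v0.11.5 run time, new v0.11.6 run time) in seconds, from the logs above
runs = {
    "baseline TPI": (6179.06, 5662.77),
    "reform TPI": (5733.78, 4862.32),
}
for name, (old, new) in runs.items():
    print(f"{name}: {100 * (new - old) / old:+.1f}%")
# baseline TPI: -8.4%
# reform TPI: -15.2%
```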

@jdebacker
Member

Great summary @rickecon!

I ran the baseline on this branch. Baseline TPI took 49 minutes. I had a few garbage collection warnings in TPI (none in SS), but there seemed to be fewer than on OG-Core 0.11.5.

I will try to do the reform run in the next day or two -- it seems there were more issues with that?

@rickecon
Member

@jdebacker. I don't see any issues with this PR. I want to merge it in and get that new version of OG-Core up. Just waiting for those last few commits to go in.

@rickecon
Member

@jdebacker. Your 2x faster runtime is because you have the Anaconda distribution for Apple M1. I haven't tested my machine in a year, but I should now check whether my one other package runs on that software.

@talumbau
Member Author

This is great info, thanks so much! Moving this scatter as I'm doing in this PR is the right call, but it doesn't move the needle much on runtime. Something really strange is happening with my runtimes: it's literally taking 10 times longer to run my code on a 128-core AMD Ryzen Threadripper machine with 256 GB of RAM. I have a MacBook Air that's a few years old; maybe I can run on that to see whether it actually does better than this Linux workstation. So the next steps would be:

  • I merge in Rick's changes and update this PR
  • we get this submitted
  • then start trying to understand the huge disparity in runtimes.

@rickecon can you post info on the machine you ran on? I'm assuming it's some kind of MacBook Pro? CPU, RAM, macOS version, etc.

talumbau and others added 2 commits April 18, 2024 20:18
@rickecon
Member

rickecon commented Apr 19, 2024

Thanks so much for this update @talumbau. Better memory management is a big step forward for the project. Merging as soon as all the tests pass (I updated the date and time of the version update in CHANGELOG.md).

@rickecon linked an issue on Apr 19, 2024 that may be closed by this pull request
@rickecon merged commit 8f1d770 into PSLmodels:master on Apr 19, 2024
8 checks passed
Successfully merging this pull request may close these issues: Remove Python 3.9 tests.