Analysis of new Wakurtosis simulations regarding the 600-node anomaly
Analysis of K8s simulations regarding the 600-node anomaly
analysis-shadow:vac:shadow-gossipsub-analysis
Worked on Topology slices
(added more RAM to the server)
analysis-shadow:waku:shadow-waku-relay-analysis
Ran 600-node NWaku Shadow simulations with and without load
analysis:nomos:simulation-analysis
Tuning the network delay/bandwidth and readjusting the probabilities did not help; the bug(s) cannot be side-stepped in any meaningful way.
New issue: for > 10 views, disk usage blows up to 1.7 TB, and the output is just text files! This was quite unexpected; we now have yet another scalability issue with the nomos sim (a minimal size-breakdown sketch follows this block).
Spent a couple of days on the Rust code working on adjustments; none of them helped with the bug.
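For reference, a minimal sketch of the kind of check one could use to see where the output volume comes from, assuming the simulation writes its text files under a single output directory; the `simulation_output` path is a hypothetical placeholder, not the actual nomos sim layout:

```python
import os
from collections import defaultdict

def output_size_by_subdir(root: str) -> dict:
    """Sum file sizes (in bytes) per top-level subdirectory under `root`."""
    sizes = defaultdict(int)
    for dirpath, _dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        # Attribute each file to the first path component below `root`.
        top = "." if rel == "." else rel.split(os.sep)[0]
        for name in filenames:
            try:
                sizes[top] += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # a file may vanish while the sim is still writing
    return dict(sizes)

if __name__ == "__main__":
    # "simulation_output" is a hypothetical path; adjust to the real output dir.
    breakdown = output_size_by_subdir("simulation_output")
    for subdir, size in sorted(breakdown.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{subdir}\t{size / 1e9:.2f} GB")
```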
analysis-gsub-model:vac:refactoring
Tuned and cleaned up the control-messages code
eng-10ktool:vac:bandwidth-test
Machines are no longer blocked
Added Kubernetes network policies to avoid having machines blocked (see the NetworkPolicy sketch at the end of this block).
600-node simulations with Kubernetes to try to replicate the 0-rate anomaly
Started an approximation of waku-simulator with Kurtosis
Met with Slava to investigate Prometheus dropping container labeling information (see the label check sketch below)
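A minimal sketch of the network-policy idea mentioned above, using the official Kubernetes Python client. The namespace, policy name, and pod labels are illustrative assumptions, not the values used in the actual deployment:

```python
from kubernetes import client, config

def apply_isolation_policy(namespace: str = "wakurtosis") -> None:
    """Create a NetworkPolicy that only allows ingress to simulation pods
    from other simulation pods, keeping experiment traffic contained."""
    config.load_kube_config()  # use config.load_incluster_config() inside the cluster

    # Hypothetical label selector; a real deployment would use its own pod labels.
    sim_pods = client.V1LabelSelector(match_labels={"app": "waku-sim"})

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="restrict-sim-traffic", namespace=namespace),
        spec=client.V1NetworkPolicySpec(
            pod_selector=sim_pods,
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[client.V1NetworkPolicyPeer(pod_selector=sim_pods)]
                )
            ],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace=namespace, body=policy
    )

if __name__ == "__main__":
    apply_isolation_policy()
```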
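For the Prometheus label issue, a small sketch of the kind of check that helps confirm which series are missing a label, via the standard /api/v1/series endpoint. The Prometheus URL, the metric, and the label name are assumptions for illustration only:

```python
import requests

PROM_URL = "http://localhost:9090"  # assumed Prometheus address

def series_missing_label(metric: str, label: str) -> list:
    """Return the label sets of `metric` series that lack `label`."""
    resp = requests.get(
        f"{PROM_URL}/api/v1/series",
        params={"match[]": metric},
        timeout=10,
    )
    resp.raise_for_status()
    return [s for s in resp.json()["data"] if label not in s]

if __name__ == "__main__":
    # Hypothetical check: cAdvisor series where the `container` label was dropped.
    for series in series_missing_label("container_cpu_usage_seconds_total", "container"):
        print(series)
```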
software-testing:waku:test-automation-js-waku
Helped Danish implement the testing part of a Static Sharding PR