Lc0
Moderators: Elijah, Igbo, timetraveller
-
- Forum Contributions
- Points: 13 638,00
- Posts: 170
- Joined: 04/11/2019, 15:35
- Status: Offline (Active 1 Month, 2 Weeks, 1 Day, 12 Hours, 11 Minutes ago)
- Topics: 4
- Reputation: 23
- Has thanked: 99 times
- Been thanked: 27 times
Lc0
Lc0 v0.26.3
▪Starting with this release, we are distributing two packages for Windows with Nvidia GPUs: the cuda package and the cudnn package. The cudnn package is what we have distributed so far (though we called it cuda), and comes with the same versions of the cuda and cudnn dlls we have been using for the last few months. The new cuda package comes with cuda 11.1 dlls, requires at least version 456.38 of the Windows Nvidia drivers, and should give better performance on RTX cards, in particular the new RTX 30XX cards.
▪Notes:
1. The cudnn package will work as-is in existing setups, but for the cuda package you may have to replace cudnn with cuda (or cuda-auto or cuda-fp16) as the backend (if one is specified) - this will certainly be necessary for multi-gpu setups.
2. Some testing indicates that cuda 11.1 may be slower for GTX 10XX cards, so owners of older cards may want to stay with the cudnn package. If your testing shows otherwise, do let us know.
DOWNLOAD HERE
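For anyone switching packages, the backend is selected with a command-line flag; a minimal sketch, assuming the lc0 binary is on your path (the multi-gpu syntax is illustrative and may vary by version):

```shell
# New cuda package, half-precision tensor cores (RTX cards):
lc0 --backend=cuda-fp16

# Let lc0 choose automatically between cuda and cuda-fp16:
lc0 --backend=cuda-auto

# Multi-gpu setups must name the backend explicitly, e.g. via multiplexing
# (the gpu indices here are just an example):
lc0 --backend=multiplexing "--backend-opts=backend=cuda-fp16,(gpu=0),(gpu=1)"
```

The same choice can be made from a GUI through the UCI option Backend (e.g. setoption name Backend value cuda-fp16).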
You're Welcome
-
- Forum Contributions
- Points: 10 937,00
- Posts: 83
- Joined: 04/11/2019, 0:27
- Status: Offline (Active 7 Months, 3 Weeks, 4 Days, 5 Hours, 1 Minute ago)
- Topics: 1
- Reputation: 91
- Has thanked: 32 times
- Been thanked: 134 times
Lc0
Lopan wrote: Sorry to ask, but where can I download T60.SV.JH.92-270? Thank you in advance for the information.
You should find it on this page:
https://github.com/jhorthos/lczero-training/wiki/Leela-Training
-
- Forum Contributions
- Points: 33 999,00
- Posts: 2585
- Joined: 05/02/2020, 10:42
- Status: Offline (Active 4 Weeks, 1 Day, 20 Hours, 53 Minutes ago)
- Medals: 2
- Topics: 194
- Reputation: 7481
- Has thanked: 6579 times
- Been thanked: 6863 times
Lc0
Lc0 v0.27.0-rc1
v0.27.0-rc1
@Tilps Tilps released this 5 hours ago
Fix a bug where position ... moves ... didn't work if the moves went off the end of the existing tree (which happens normally when playing from an opening book).
Go here for the binaries >> https://github.com/LeelaChessZero/lc0/releases
-
- Global moderators
- Points: 6 305,00
- Forum Contributions
- Posts: 2145
- Joined: 01/11/2019, 14:27
- Status: Offline (Active 11 Hours, 36 Minutes ago)
- Medals: 2
- Topics: 351
- Reputation: 382
- Location: Biergarten
- Has thanked: 1917 times
- Been thanked: 4086 times
Lc0
v0.27.0-rc2
-Fix additional cases where 'invalid move' could be incorrectly reported.
-Replace WDL softmax in cudnn backend with same implementation as cuda
backend. This fixes some inaccuracy issues that were causing training
data to be rejected at a fairly low frequency.
-Ensure that training data Q/D pairs form valid WDL targets even if there
is accumulated drift in calculation.
-Fix for the calculation of the 'best q is proven' bit in training data.
-Multiple fixes for timelosses and infinite instamoving in smooth time
manager. Smooth time manager now made default after these fixes.
https://github.com/LeelaChessZero/lc0/releases/tag/v0.27.0-rc2
-
- Forum Contributions
- Points: 10 937,00
- Posts: 83
- Joined: 04/11/2019, 0:27
- Status: Offline (Active 7 Months, 3 Weeks, 4 Days, 5 Hours, 1 Minute ago)
- Topics: 1
- Reputation: 91
- Has thanked: 32 times
- Been thanked: 134 times
Lc0
v0.27.0 Official Release:
Hot off the presses, the latest and greatest from the Lc0 team.
Just in time to be the first meal for the new Stockfish 13.
https://github.com/LeelaChessZero/lc0/releases
-
- Inactive User
- Points: 6 000,00
- Posts: 15
- Joined: 05/06/2021, 13:00
- Status: Offline (Active 2 Years, 9 Months, 2 Weeks, 2 Days, 11 Hours, 43 Minutes ago)
- Topics: 0
- Reputation: 2
- Been thanked: 6 times
Lc0
Leela v0.28.0-rc1
@borg323 borg323 released this 6 hours ago
https://github.com/LeelaChessZero/lc0/releases
Multigather is now made the default (and also improved). Some search settings
have changed meaning, so if you have modified values please discard them.
Specifically, max-collision-events, max-collision-visits and
max-out-of-order-evals-factor have changed default values, but other options
also affect the search. Similarly, check that your gui is not caching the old
values.
Performance improvements for the cuda/cudnn backends.
Support for policy focus during training.
Larger/stronger 15b default net for all packages except android, blas and dnnl
that get a new 10b network.
The distributed binaries come with the mimalloc memory allocator for better
performance when a large tree has to be destroyed (e.g. after an unexpected
move).
The legacy time manager will use more time for the first move after a long
book line.
The --preload command line flag will initialize the backend and load the
network during startup.
A 'fen' command was added as a UCI extension to print the current position.
Experimental onednn backend for recent intel cpus and gpus.
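The --preload flag and the new fen extension fit together in a typical UCI session; a sketch of hypothetical usage (fen is an lc0-specific UCI extension, not standard UCI):

```shell
# Initialize the backend and load the network at startup, so the
# first search doesn't pay the loading cost:
lc0 --preload

# Inside the UCI session you can then type, for example:
#   position startpos moves e2e4 e7e5
#   fen
# and lc0 prints the FEN string of the current position.
```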
-
- I've been banned!
- Points: 6 000,00
- Posts: 1828
- Joined: 05/11/2019, 6:35
- Status: Offline (Active 1 Year, 4 Months, 1 Week, 3 Days, 20 Hours, 29 Minutes ago)
- Topics: 318
- Reputation: 2765
- Location: ARMONIA
- Has thanked: 2241 times
- Been thanked: 3120 times
Lc0
Baba wrote: xrf05. Dear friend, where do we download the 744005 weight file?
Download from here:
https://training.lczero.org/networks/2
744005 | run 2 | cfc44784 | Elo 2332.00 | 65002 games | 10 blocks | 128 filters | 2021-06-09 18:01:08 +00:00
Make Someone Happy Today...
-
- I've been banned!
- Points: 9 979,00
- Posts: 86
- Joined: 05/11/2019, 20:02
- Status: Offline (Active 8 Months, 1 Week, 4 Days, 16 Hours, 54 Minutes ago)
- Topics: 30
- Reputation: 93
- Has thanked: 19 times
- Been thanked: 123 times
Lc0
LC0 v0.29.0 Development (blas)
https://ci.appveyor.com/project/LeelaChessZero/lc0/build/job/g7y9ypatpwofm6s9/artifacts
Multigather is now made the default (and also improved). Some search settings
have changed meaning, so if you have modified values please discard them.
Specifically, max-collision-events, max-collision-visits and
max-out-of-order-evals-factor have changed default values, but other options
also affect the search. Similarly, check that your gui is not caching the old
values.
Performance improvements for the cuda/cudnn backends.
Support for policy focus during training.
Larger/stronger 15b default net for all packages except android, blas and dnnl
that get a new 10b network.
The distributed binaries come with the mimalloc memory allocator for better
performance when a large tree has to be destroyed (e.g. after an unexpected
move).
The legacy time manager will use more time for the first move after a long
book line.
The --preload command line flag will initialize the backend and load the
network during startup.
A 'fen' command was added as a UCI extension to print the current position.
Experimental onednn backend for recent intel cpus and gpus.
-
- Global moderators
- Points: 6 305,00
- Forum Contributions
- Posts: 2145
- Joined: 01/11/2019, 14:27
- Status: Offline (Active 11 Hours, 36 Minutes ago)
- Medals: 2
- Topics: 351
- Reputation: 382
- Location: Biergarten
- Has thanked: 1917 times
- Been thanked: 4086 times
Lc0
v0.28.0-rc2
-The cuda backend option multi_stream is now off by default. You should
consider setting it to on if you have a recent gpu with a lot of vram.
-Updated default parameters.
-Newer and stronger nets are included in the release packages.
-Added support for onnx network files and runtime with the "onnx" backend.
-Several bug and stability fixes.
Download
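To experiment with the multi_stream option mentioned above on a recent GPU with plenty of VRAM, it is passed as a backend option; a sketch (option spelling per the release notes, exact syntax may vary by version):

```shell
lc0 --backend=cuda-fp16 --backend-opts=multi_stream=true
```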
-
- Forum Contributions
- Points: 6 000,00
- Posts: 42
- Joined: 04/11/2019, 10:08
- Status: Offline (Active 1 Year, 1 Month, 3 Weeks, 3 Days, 16 Hours, 5 Minutes ago)
- Topics: 3
- Reputation: 24
- Has thanked: 46 times
- Been thanked: 39 times
Lc0
What is the meaning of "a recent gpu with a lot of vram"?
I need advice from the experts.
I use an RTX 2060 Super 8GB.
Should I stay with the default?
-
- Forum Contributions
- Points: 7 537,00
- Posts: 100
- Joined: 04/11/2019, 21:23
- Status: Offline (Active 1 Week, 2 Days, 17 Hours, 4 Minutes ago)
- Medals: 1
- Topics: 4
- Reputation: 19
- Has thanked: 42 times
- Been thanked: 64 times
Lc0
I think you may want to ask on the Discord channel; there are some very helpful folks there who are very knowledgeable.
-
- Forum Contributions
- Points: 40 305,00
- Posts: 1924
- Joined: 04/11/2019, 14:45
- Status: Offline (Active 3 Months, 3 Days, 10 Hours, 10 Minutes ago)
- Medals: 1
- Topics: 71
- Reputation: 2388
- Location: North-Italy
- Has thanked: 1185 times
- Been thanked: 2951 times
Lc0
I'm not 100% sure, but top video cards now have a processor faster than the CPU of a high-end personal computer, although their way of computing is really different from a CPU's.
From https://stackoverflow.com/questions/6435428/why-are-gpus-more-powerful-than-cpus:
GPUs are designed with one goal in mind: process graphics really fast. Since this is the only concern they have, there have been some specialized optimizations in place that allow for certain calculations to go a LOT faster than they would in a traditional processor.
In the case of password cracking (or the molecular dynamics "Folding@home" project) what has happened is that programmers have found ways of leveraging these optimized processes to do things like crunch passwords at a faster rate.
Your standard CPU has to do a lot more different calculation and processing types than what graphics processors do, so they can't be optimized in a similar manner.
The Lc0 team has found a way to exploit this power. GPU computing is well suited to neural networks; that's why Stockfish, whose core is entirely based on array computing (like any alpha-beta engine), can't be transferred to such devices.
Other than password cracking, I know that top graphics cards are widely used for Bitcoin mining, and a lot of billionaires have by now built computer farms to mine Bitcoin. That, along with the effects of the coronavirus, is why top GeForces (e.g. the 3080) almost disappeared from the market or their prices reached the sky.