get seed

Furthermore, if you are using CUDA tensors, and your CUDA version is 10.2 or greater, you should set the environment variable CUBLAS_WORKSPACE_CONFIG according to CUDA documentation: https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility
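A minimal sketch of setting this variable from Python before any CUDA/cuBLAS work is done (":4096:8" is one of the two values the cuBLAS documentation allows; the other is ":16:8"):

```python
import os

# Must be set before the first cuBLAS call; ":4096:8" reserves a
# fixed workspace so cuBLAS routines behave deterministically.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
```

Setting it in the shell that launches the process (e.g. `CUBLAS_WORKSPACE_CONFIG=:4096:8 python train.py`) works equally well.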

For custom operators, you might need to set the Python seed as well:
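For example, operators that draw from Python's built-in RNG can be made reproducible by re-seeding it:

```python
import random

random.seed(0)            # seed Python's built-in RNG
x = random.random()
random.seed(0)            # re-seeding reproduces the same draw
assert x == random.random()
```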

If you are using any other libraries that use random number generators, refer to the documentation for those libraries to see how to set consistent seeds for them.

CUDA convolution determinism
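A common sketch for making cuDNN convolutions repeatable is to disable the autotuner and request deterministic kernels, at a possible performance cost:

```python
import torch

# Disable the cuDNN autotuner (which can pick different algorithms
# run-to-run) and ask for deterministic convolution kernels.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
```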

You can use torch.manual_seed() to seed the RNG for all devices (both CPU and CUDA):
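A minimal example, using a CPU tensor to show that the same seed yields the same draws:

```python
import torch

torch.manual_seed(0)      # seeds the RNG for CPU and all CUDA devices
x = torch.rand(3)
torch.manual_seed(0)
assert torch.equal(x, torch.rand(3))  # same seed -> identical tensor
```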

Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds.

If you or any of the libraries you are using rely on NumPy, you can seed the global NumPy RNG with:
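For instance, seeding the global (legacy) NumPy RNG makes its draws repeatable; note that code using the newer `numpy.random.Generator` API must seed its own generator instead:

```python
import numpy as np

np.random.seed(0)         # seed the global (legacy) NumPy RNG
x = np.random.rand(3)
np.random.seed(0)
assert np.array_equal(x, np.random.rand(3))
```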

torch.use_deterministic_algorithms() lets you configure PyTorch to use deterministic algorithms instead of nondeterministic ones where available, and to throw an error if an operation is known to be nondeterministic (and without a deterministic alternative).
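In its simplest form:

```python
import torch

# Use deterministic algorithms where available; operations that have
# no deterministic implementation will raise a RuntimeError when run.
torch.use_deterministic_algorithms(True)
assert torch.are_deterministic_algorithms_enabled()
```

Note that some CUDA operations additionally require `CUBLAS_WORKSPACE_CONFIG` to be set (as described above) once this mode is enabled.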

Regarding set.seed(): can I retrieve the seed after running some code if I didn’t set it explicitly?

The function setSeed behaves much like set.seed, but any parameters normally passed to set.seed beyond the integer (kind, normal.kind, sample.kind) must be listed in args.set, because the ellipsis (...) of setSeed is used to pass parameters to initSeed(...), an internal function that enables setSeed and getSeed to work.

I also wrote a C-style rand() function that takes min, max, n, method, and so on. This is how I generate an "integer" to feed setSeed and store in memory. For default seed generation I use Sys.time() for the bounds (min = -1*as.integer(Sys.time()) and max = as.integer(Sys.time())). sample is a bad choice because it has to create a vector over the whole range just to compute a single value, but it is a method option of rand(), which feeds initSeed. I found the default "high-low" method to be slightly faster than "floor".

By default it stores the seed value in an element of the global list called "last". This lets you keep track of different remembered seeds depending on the processes you are running. In the example below I access "last" specifically, as well as "nsim", a second seed stored in memory.

But I don’t know how that helps.

I’ve been re-running some code (interactively, at the console) containing a function that randomises some sample of the input data (the function is part of the kohonen package). After playing with it for some time to see the variety of output (it was an ‘unstable’ problem), I noticed one result that was pretty interesting. I of course had not used set.seed(), but wondered whether I could get the seed after running the code, in order to reproduce the result.

I have "hacked" together a solution with a seed memory, which requires a global variable .random.seed.memory.