Find an appropriate AC and/or AF cutoff for the LD data so that the code does not run into out-of-memory issues.
Wenhan mentioned hitting errors at ~10 million variants per run (i.e., per genetic ancestry group), but the prior gnomAD v2 code notes the same issue at ~30 million variants, both on standard workers. Where will I actually encounter this when running?
For the cutoffs: we are running on both NFE and AFR. Is there a single cutoff that lands both groups in an appropriate range (Wenhan ballparked 7-9 million variants)?
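A minimal sketch of how candidate cutoffs could be probed per group before committing to one, assuming a gnomAD-style sites Hail Table with `freq` / `freq_index_dict` annotations; the table path and the `freq_index_dict` key names (`"nfe_adj"`, `"afr_adj"`) are placeholders and should be swapped for whatever the actual release schema uses:

```python
import hail as hl

# Hypothetical sites table path; replace with the actual release HT.
ht = hl.read_table("gs://my-bucket/gnomad_sites.ht")

# Frequency-array indices for the groups of interest. The key names here
# are assumptions; use whatever freq_index_dict actually contains.
freq_index = hl.eval(ht.freq_index_dict)
groups = {"nfe": freq_index["nfe_adj"], "afr": freq_index["afr_adj"]}

# Candidate AF cutoffs to probe; the target is roughly 7-9 million
# variants per genetic ancestry group.
cutoffs = [0.0005, 0.001, 0.005, 0.01]

for group, idx in groups.items():
    for cutoff in cutoffs:
        # Count variants in this group passing the candidate cutoff.
        n_kept = ht.filter(ht.freq[idx].AF >= cutoff).count()
        print(f"{group}: AF >= {cutoff} -> {n_kept:,} variants")
```

An analogous loop over AC instead of AF (or both) would tell us whether one shared cutoff keeps both NFE and AFR under the per-run variant limit, or whether the two groups need different thresholds.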
Relevant to #1656