

I’m not knowledgeable in this area, but I wish there were a way to partition the model and stream the partitions over the input, allowing for some kind of serial processing of models that exceed memory. Like if I could allocate 32 GB of RAM and process a 500 GB model, just at a (500/32 ≈) 15x slower rate.
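
A minimal sketch of the idea in Python, assuming the model has already been partitioned into per-layer weight matrices saved as .npy files (the file layout, shapes, and activation here are all hypothetical, just to show the streaming pattern):

    import numpy as np

    def streamed_forward(x, layer_paths):
        # Only one partition is resident in RAM at a time, so peak memory
        # is a single layer's weights rather than the whole model.
        for path in layer_paths:
            w = np.load(path)      # stream the next partition in from disk
            x = np.tanh(x @ w)     # apply it (toy activation, for illustration)
            del w                  # release it before the next partition loads
        return x

Batching many inputs per pass helps, since each partition is read from disk once per pass regardless of batch size, so the per-input disk cost gets amortized.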





You’ll need to provide all the sites you visited immediately after each of the ones you searched. Your Referer header will give that info away freely. So if the search terms are in the URL’s query parameters and you then go to Facebook, recovering them is as easy as {k: v for k, _, v in (pair.partition("=") for pair in request.headers["Referer"].split("?", 1)[-1].split("&"))}
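
For reference, the standard library already does this parsing robustly; parse_qsl also URL-decodes the values, which the hand-rolled comprehension skips. The Referer value below is made up, standing in for whatever the destination site receives when you click through from a search results page:

    from urllib.parse import urlsplit, parse_qsl

    # Hypothetical leaked Referer: the full referring URL, query string included,
    # arrives intact unless a referrer-policy strips it.
    referer = "https://search.example.com/results?q=private+search+terms&lang=en"

    params = dict(parse_qsl(urlsplit(referer).query))
    print(params)  # {'q': 'private search terms', 'lang': 'en'}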