A machine-learning algorithm has demonstrated that it can process data sets too large to fit in a computer's available memory by identifying a massive data set's key features and dividing the data into manageable batches that don't overwhelm the hardware. During a test run on the world's fifth-fastest supercomputer, the algorithm set a world record for factorizing huge data sets. Equally efficient on laptops and supercomputers, the highly scalable algorithm overcomes the hardware bottlenecks that have prevented the processing of information from data-rich applications in cancer research, satellite imagery, social media networks, national security science and earthquake research, to name just a few.
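The article does not name the exact method, but the idea of factorizing a matrix too large for memory by streaming it in row batches can be sketched in a few lines. The example below is a hypothetical, minimal out-of-core nonnegative matrix factorization (NMF) using multiplicative updates: each batch of rows is loaded, its slice of the row-factor matrix W is updated, and partial sums are accumulated so the shared column-factor matrix H can be updated without ever holding the full data matrix in memory. All function and variable names here are illustrative, not from the article.

```python
import numpy as np

def out_of_core_nmf(batches, n_features, rank, n_iter=100, seed=0):
    """Factorize X ~= W @ H where X is only ever seen one row-batch at a time.

    batches: zero-argument callable that yields the row-blocks of X, in the
             same order on every call (e.g. reading chunks from disk).
    Returns (W_blocks, H): per-batch blocks of W, and the shared H.
    """
    rng = np.random.default_rng(seed)
    H = rng.random((rank, n_features)) + 1e-4   # shared factor, fits in memory
    W_blocks = None                             # per-batch row factors

    for _ in range(n_iter):
        numer = np.zeros((rank, n_features))    # accumulates W^T X across batches
        gram = np.zeros((rank, rank))           # accumulates W^T W across batches
        new_W = []
        HHt = H @ H.T                           # small rank x rank matrix
        for i, Xb in enumerate(batches()):
            Wb = (rng.random((Xb.shape[0], rank)) + 1e-4
                  if W_blocks is None else W_blocks[i])
            # multiplicative update for this batch's rows of W
            Wb = Wb * (Xb @ H.T) / (Wb @ HHt + 1e-10)
            new_W.append(Wb)
            numer += Wb.T @ Xb
            gram += Wb.T @ Wb
        W_blocks = new_W
        # multiplicative update for H, built only from the accumulated sums
        H = H * numer / (gram @ H + 1e-10)

    return W_blocks, H
```

Only one batch of the data plus the small `rank`-sized accumulators need to be resident at a time, which is why this style of algorithm scales from a laptop to a supercomputer: the batch size is chosen to fit whatever memory the hardware offers.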
This article has been indexed from Hacking News — ScienceDaily