That is, according to an ACL user who attended the 2018 ACL Connections conference.
This ACL user, whom I will refer to as Kimiko, told me that she had an interesting conversation with someone on the ACL development team.
Kimiko and the developer were discussing how ACL can only handle so many records before it chokes, even when using the fastest hard drives and storing and running ACL and all data files locally (rather than on the LAN).
Kimiko said that once she gets to around 20 million records, 25 columns wide, ACL slows to a crawl. She tried over 100 million records (just once), and no dice.
The developer allegedly said that this is due to ACL being text-based rather than database-based, and that changing ACL to a database-backed system would require re-architecting the entire platform.
In all fairness, you have to remember that ACL is decades old, and was born when the files being analyzed were small, hard disk space was expensive, and using text files to store data was easy and cheap.
I’ve always wondered why ACL won’t make its software multi-threaded, but even that won’t solve the overall problem: the main bottleneck in ACL is input/output, and there’s a limit to how fast you can push data in and out of text files.
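To get a rough feel for why text-based storage becomes the bottleneck, here is a minimal Python sketch (purely illustrative; this is not ACL’s actual internal format) comparing the cost of parsing numbers stored as text against bulk-reading the same numbers stored as fixed-width binary, the way a database engine typically would:

```python
# Illustrative only -- not ACL's real storage format.
# Compares parsing a text-based column vs. bulk-reading a binary column.
import struct
import time

N = 200_000
values = list(range(N))

# Text storage: one number per line, like a flat text data file.
text_blob = "\n".join(str(v) for v in values).encode()

# Binary storage: fixed-width 8-byte signed integers.
bin_blob = struct.pack(f"<{N}q", *values)

# Text route: every character must be scanned and converted.
t0 = time.perf_counter()
parsed = [int(line) for line in text_blob.splitlines()]
text_time = time.perf_counter() - t0

# Binary route: one bulk unpack, no per-character parsing.
t0 = time.perf_counter()
unpacked = struct.unpack(f"<{N}q", bin_blob)
binary_time = time.perf_counter() - t0

assert parsed == list(unpacked)
print(f"text parse: {text_time:.4f}s, binary read: {binary_time:.4f}s")
```

On most machines the binary read is several times faster, and the gap widens with record count, which is consistent with what Kimiko describes hitting around the 20-million-record mark.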
Back to the developer… Kimiko said she asked him whether, given that files are getting larger and ACL isn’t getting faster (the improvements over the years have been minor by comparison), ACL would even be available on the desktop in 5 years.
The developer allegedly said that it is possible that ACL would only be available in the cloud in the future.
So much for running ACL on the train and other places with no wifi…
Perhaps that’s why ACL Robotics was rolled out: to get ACLers used to running their analytics in the cloud?
I’m not a hardware guy, so anyone who understands these issues better than Kimiko and me, please comment.
Also, anyone at ACL care to comment?
Other related posts: