We help fund managers generate more alpha by accelerating the training of deep neural networks. With hyper-efficient network architectures, training speeds can be increased by orders of magnitude without investing in additional hardware.
Our innovative approach structures training into multiple tasks with relative objectives, allowing neural networks to be trained in a fraction of the time required by traditional methods.
A ten-fold improvement is typical for simple networks; for more complex networks, training speeds can be hundreds of times faster.
Our unique approach makes the exploration of the deep neural network topology space highly efficient, unlocking an additional training-time boost where the model structure is unknown beforehand.
Traditionally, increasing training speed has required significant investment in hardware, either to scale horizontally or to upgrade to the latest generation.
Now, with Hard Sums Technologies, we unlock a third approach with the implementation of an entirely novel structure for hyper-efficient training.
By accelerating training, we improve outcomes for our customers whilst reducing the carbon footprint of their operations.
Our design and training engine can be instructed to favour the identification and correction of biases during the model creation process, thus promoting fairness and interpretability where it matters.
Furthermore, our unique approach supports network architectures in which layers can interface with non-differentiable objects (such as databases and external memories) to extend network capability. This allows our customers to utilise their data to its fullest potential and to solve more complex challenges than are currently possible.
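By way of illustration only, the sketch below shows one generic, publicly known pattern for mixing a non-differentiable lookup (a plain Python dictionary standing in for a database or external memory) into a network's forward pass using a straight-through gradient estimator in PyTorch. It is a hypothetical example, not a description of our engine, and all names in it are invented for the illustration.

```python
# Illustrative sketch only: a generic way for a layer to consult a
# non-differentiable resource (a plain dict standing in for a database or
# external memory) while keeping the rest of the network trainable.
# This is a standard straight-through-estimator pattern, not the
# Hard Sums Technologies engine; all names here are hypothetical.
import torch
import torch.nn as nn


class ExternalLookupLayer(nn.Module):
    """Projects activations to a key, queries a non-differentiable store,
    and mixes the retrieved value back into the forward pass."""

    def __init__(self, dim: int, store: dict):
        super().__init__()
        self.to_key = nn.Linear(dim, dim)
        self.store = store  # any external, non-differentiable object

    def lookup(self, keys: torch.Tensor) -> torch.Tensor:
        # The lookup itself runs outside the autograd graph.
        with torch.no_grad():
            rows = []
            for k in keys:
                idx = int(k.sum().round().item())  # crude quantised key
                rows.append(self.store.get(idx, torch.zeros_like(k)))
            return torch.stack(rows)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        keys = self.to_key(x)
        retrieved = self.lookup(keys)
        # Straight-through: use the retrieved values in the forward pass,
        # but let gradients flow back through the key projection.
        return keys + (retrieved - keys).detach()


# Toy usage: a store mapping integer keys to 16-dimensional vectors.
store = {i: torch.randn(16) for i in range(10)}
layer = ExternalLookupLayer(16, store)
out = layer(torch.randn(4, 16))
out.sum().backward()  # gradients reach to_key despite the lookup
```

The design choice in this sketch is the standard one: the external query sits outside the autograd graph, while gradients still flow through the key projection, so a layer can consult external data without blocking training.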
Prohibitively long training times have forced compromises in the development of neural-network-based investment strategies. Such compromises are now a thing of the past.
Develop and test many more potential investment strategies
Investigate a much wider range of model topologies
Quickly train and exploit new models before dataset obsolescence
Gain greater insight into why a network has converged to a specific model
Implement more complex models than currently feasible
Validate model performance under a wider range of risk scenarios
Retrain deployed models more often than is currently viable
More rapidly shift to a new trading strategy when called for
At our core, we elevate the success of AI-based fund management by providing a revolutionary approach to the training of neural networks. By removing training time as a constraint in the development of new models, we enable more complex and better-performing models to be deployed, boosting fund performance.
Hard Sums Technologies was born out of an ambition to go beyond the state of the art in AI, without having to continually reinvest in the latest hardware or cloud computing services.
Our focus is on training networks faster and smarter, rather than with brute-force computing power. We have developed a highly innovative approach that increases training speeds by several orders of magnitude without an exponential increase in computing costs. The more complex the problem (i.e. the deeper the neural network), the greater the improvement in training time compared to the status quo.