All of the compiler technology being built by PolyMage Labs is based on the MLIR infrastructure. It encompasses abstractions used by high-level programming models like TensorFlow, through mid-level forms involving the polyhedral abstraction, loop nests, and multi-dimensional arrays, down to low-level specialized accelerator instructions. If you are a hardware vendor building specialized chips for Machine Learning and Artificial Intelligence and are interested in exploring our products or services, please reach out to us. Alternatively, if you are building new programming models or languages that require high-performance compiler technology, we may be able to help.
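To give a flavor of the mid-level forms mentioned above, here is a hypothetical sketch (not taken from any PolyMage Labs product) of a naive matrix multiplication written in MLIR's affine dialect. Loop nests over multi-dimensional arrays in this form are exactly what polyhedral analyses and transformations operate on before lowering toward hardware-specific instructions:

```mlir
// A naive 64x64 matrix multiplication as an affine loop nest.
// Illustrative only; function name and sizes are arbitrary.
func.func @matmul(%A: memref<64x64xf32>, %B: memref<64x64xf32>,
                  %C: memref<64x64xf32>) {
  affine.for %i = 0 to 64 {
    affine.for %j = 0 to 64 {
      affine.for %k = 0 to 64 {
        %a = affine.load %A[%i, %k] : memref<64x64xf32>
        %b = affine.load %B[%k, %j] : memref<64x64xf32>
        %c = affine.load %C[%i, %j] : memref<64x64xf32>
        %p = arith.mulf %a, %b : f32
        %s = arith.addf %c, %p : f32
        affine.store %s, %C[%i, %j] : memref<64x64xf32>
      }
    }
  }
  return
}
```

Because the loop bounds and array subscripts here are affine, a polyhedral compiler can legally tile, interchange, or vectorize this nest automatically on the way to accelerator code.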
PolyMage Labs' engineers actively contribute to open-source ML/AI compiler projects, notably MLIR and TensorFlow/MLIR, regularly upstreaming code as well as participating in their online design discussions and fora.