Gentlemen (and women), start your inference engines. One of the world's largest buyers of computing systems is entering evaluation mode for deep learning accelerators that speed up services based on trained models.
1. Flex Logix's nnMAX 1K inference tile delivers INT8 Winograd acceleration, which improves accuracy while reducing the number of computations required. The InferX X1 chip combines multiple nnMAX clusters. It ...
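Winograd acceleration refers to Winograd's minimal-filtering algorithm, which trades multiplications for cheaper additions in small convolutions. As an illustration only (not Flex Logix's implementation, and the F(2,3) tile size here is an assumption for the sketch), the 1-D variant F(2,3) computes two outputs of a 3-tap convolution with 4 multiplications instead of the direct method's 6:

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap convolution over
    four inputs d[0..3] using 4 multiplications instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # The four Winograd products.
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    # Outputs are recovered with additions only.
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_conv(d, g):
    """Reference: direct sliding-window correlation (6 multiplications)."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]
```

In a hardware tile the filter transform `(g0 + g1 + g2)/2` is precomputed once per weight, so the per-pixel cost is dominated by the reduced multiplication count; with INT8 data, careful scaling of these transforms is what preserves accuracy.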
A Korean research team has unveiled a core technology that reduces the time and cost of developing artificial intelligence (AI) chips for small and medium-sized enterprises and startups.