Dolphin’s PCI Express software is aimed at performance-critical applications. Advanced performance-improving tools, such as the SuperSockets™ API, remove traditional network bottlenecks. By using Sockets, IP, and our advanced low-level APIs, applications can take advantage of PCIe’s low-latency PIO and DMA operations. The result is improved application performance, with shared memory latencies of 0.54 µs, sockets latencies under 1 µs, and throughput over 6500 MB/s. Our software components include an optimized TCP/IP driver, SuperSockets, and our SISCI shared memory API. The SISCI API offers further advances such as replicated/reflective memory and peer-to-peer transfers.
Dolphin’s reflective memory, or multicast, solution reinterprets traditional reflective memory offerings. Traditional reflective memory products, which have been on the market for years, implement a ring-based topology. Dolphin’s reflective memory solution uses a modern switched architecture that delivers lower latency and higher throughput.
Dolphin’s Software Infrastructure Shared-Memory Cluster Interconnect (SISCI) API makes developing PCIe network applications faster and easier. The SISCI API is a well-established API for shared memory environments. In PCIe multiprocessing architectures, the SISCI API enables PCIe-based applications to use distributed resources such as CPUs, I/O, and memory. The resulting applications feature reduced system latency and increased data throughput.
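To make the shared-memory model concrete, the sketch below outlines the typical SISCI flow in C-flavored pseudocode: one node exports a local memory segment, another node connects to it and maps it into its own address space. The call names follow Dolphin's SISCI documentation, but signatures are abbreviated and error handling is omitted; the `SEGMENT_ID`, `NODE_ID`, and `ADAPTER_NO` placeholders are illustrative, so treat this as a sketch rather than compilable code.

```
/* Pseudocode sketch of a typical SISCI session; consult the SISCI
 * reference manual for exact signatures and error handling. */
SCIInitialize(...);
SCIOpen(&sd, ...);

/* Node A: create a local memory segment and export it over PCIe. */
SCICreateSegment(sd, &local_segment, SEGMENT_ID, SIZE, ...);
SCIPrepareSegment(local_segment, ADAPTER_NO, ...);
SCISetSegmentAvailable(local_segment, ADAPTER_NO, ...);

/* Node B: connect to the remote segment and map it locally. */
SCIConnectSegment(sd, &remote_segment, NODE_ID, SEGMENT_ID, ...);
addr = SCIMapRemoteSegment(remote_segment, &map, ...);

/* Ordinary CPU stores to `addr` become low-latency PIO writes into
 * node A's memory; larger blocks can be queued as DMA transfers. */
```

Once the remote segment is mapped, no message-passing layer sits between the application and the wire: a plain pointer write on node B lands directly in node A's memory, which is where the sub-microsecond latency figures come from.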
PCIe networks can replace local Ethernet networks with a high-speed, low-latency alternative. SuperSockets™ is a unique implementation of the Berkeley Sockets API. With SuperSockets™, network applications transparently capitalize on the PCIe transport to achieve performance gains.
Dolphin PCIe hardware and the SuperSockets™ software support the most demanding sockets-based applications with an ultra-low-latency, high-bandwidth, low-overhead, and highly available platform. New and existing Linux and Windows applications require no modification to be deployed on Dolphin’s high-performance platform.
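The point of the Berkeley Sockets compatibility is that ordinary application code is what gets accelerated. As a minimal illustration of the kind of code involved, the following is a standard POSIX TCP round-trip over the loopback interface; nothing in it is Dolphin-specific, which is exactly the claim: code like this runs unmodified, and SuperSockets™ transparently redirects the transport onto PCIe. The `echo_roundtrip` helper is our own illustrative name, not part of any Dolphin API.

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* Round-trips `msg` through a one-shot TCP echo server on loopback,
 * using only standard Berkeley sockets calls. Returns 0 on success
 * and leaves the echoed reply (NUL-terminated) in `out`. */
int echo_roundtrip(const char *msg, char *out, size_t outlen)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    if (srv < 0) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                       /* let the kernel pick a port */
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) return -1;

    socklen_t len = sizeof addr;
    getsockname(srv, (struct sockaddr *)&addr, &len); /* learn the port */
    listen(srv, 1);

    pid_t pid = fork();
    if (pid == 0) {                          /* child: one-shot echo server */
        int conn = accept(srv, NULL, NULL);
        char buf[256];
        ssize_t n = recv(conn, buf, sizeof buf, 0);
        if (n > 0) send(conn, buf, (size_t)n, 0);
        close(conn);
        _exit(0);
    }

    int cli = socket(AF_INET, SOCK_STREAM, 0);
    connect(cli, (struct sockaddr *)&addr, sizeof addr);
    send(cli, msg, strlen(msg), 0);
    ssize_t n = recv(cli, out, outlen - 1, 0);
    out[n > 0 ? n : 0] = '\0';
    close(cli);
    close(srv);
    waitpid(pid, NULL, 0);
    return n > 0 ? 0 : -1;
}
```

On a stock kernel this traffic goes through the TCP/IP stack; with the SuperSockets™ module loaded and the peer reachable over the PCIe fabric, the same `socket`/`connect`/`send`/`recv` calls are served by the low-latency transport instead.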
Dolphin Interconnect Solutions is a leader in low-latency, high-performance interconnects for embedded, financial, and general-purpose computing. Dolphin products are used to connect multiple computers and I/O systems together to create high-performance computing platforms for demanding applications. Application clusters benefit from the significant improvements in response time and transaction throughput delivered by Dolphin’s PCI Express software and hardware products.
Dolphin Interconnect is headquartered in Oslo, Norway. U.S. operations are located in New Hampshire. Dolphin maintains sales offices in Dallas, Los Angeles, San Diego, Boston, Oslo, and Toulouse, and is represented internationally by a network of resellers, distributors, and integrators. Since the early 1990s, Dolphin has developed leading and innovative computer interconnect technology. The company has been instrumental in the development of many of the industry’s most important standards, including PCI, SCI, StarFabric, and PCI Express. Dolphin’s current products are based on PCI Express.
Dolphin launched its PCI Express product line in 2007 specifically to address the dynamic and rapidly growing requirement for high-speed data movement. PCI Express provides significant and sustainable advantages over alternatives in multi-computer and I/O expansion applications. Dolphin’s products combine Dolphin PCI Express software and hardware. The PCI Express software provides flexibility of integration into existing or new application environments. Dolphin’s software stack supports applications ranging from reflective memory to sockets-based application acceleration. Dolphin’s PCI Express hardware delivers a roadmap of performance from Gen1 to Gen3 and beyond.
Dolphin supports embedded applications with standard hardware, and also supports porting to custom designs. PCI Express-based systems combined with Dolphin software enable developers to meet latency and performance challenges.