Climate TRACE was built to collect and share data on greenhouse gas emissions from anthropogenic (human) activities in order to facilitate climate action.
Robtex is used for many kinds of research on IP numbers, domain names, and related records.
Robtex gathers public information about IP numbers, domain names, host names, autonomous systems, routes, and more from a variety of sources, indexes the data in a large database, and provides free access to it.
We aim to make the fastest and most comprehensive free DNS lookup tool on the Internet.
Our database now contains billions of documents of internet data collected over more than a decade.
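As a rough illustration of what a single forward and reverse DNS lookup involves (a conceptual sketch only, not Robtex's implementation, which operates at a very different scale), here is a minimal Java example using the standard library resolver; the default hostname robtex.com is just a placeholder:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsLookup {
    public static void main(String[] args) throws UnknownHostException {
        String host = args.length > 0 ? args[0] : "robtex.com"; // placeholder default

        // Forward lookup: hostname -> all A/AAAA records the resolver returns.
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            System.out.println(host + " -> " + addr.getHostAddress());

            // Reverse lookup: IP -> PTR record, if one is published.
            System.out.println(addr.getHostAddress() + " -> " + addr.getCanonicalHostName());
        }
    }
}
```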
The scalable, open source big data analytics platform for networks and services.
Apache HBase is the Hadoop database, a distributed, scalable, big data store.
Use Apache HBase when you need random, real-time read/write access to your big data. This project's goal is the hosting of very large tables (billions of rows by millions of columns) atop clusters of commodity hardware. Apache HBase is an open-source, distributed, versioned, non-relational database modeled after Google's Bigtable ("Bigtable: A Distributed Storage System for Structured Data", Chang et al.). Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
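To make the "random, real-time read/write access" concrete, here is a minimal client-side sketch using the standard HBase Java API: it writes one cell and reads it back by row key. The table name dns_records and column family a are hypothetical; they stand in for whatever schema a deployment actually uses:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("dns_records"))) { // hypothetical table

            // Write one cell: row key = hostname, column family "a", qualifier "ip".
            Put put = new Put(Bytes.toBytes("example.com"));
            put.addColumn(Bytes.toBytes("a"), Bytes.toBytes("ip"), Bytes.toBytes("93.184.216.34"));
            table.put(put);

            // Random real-time read of the same row by its key.
            Get get = new Get(Bytes.toBytes("example.com"));
            Result result = table.get(get);
            byte[] ip = result.getValue(Bytes.toBytes("a"), Bytes.toBytes("ip"));
            System.out.println(Bytes.toString(ip));
        }
    }
}
```

The row key is the only index HBase maintains, which is why key-based lookups like this are fast while arbitrary queries require table scans.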
D3.js is a JavaScript library for manipulating documents based on data. D3 helps you bring data to life using HTML, SVG and CSS. D3’s emphasis on web standards gives you the full capabilities of modern browsers without tying yourself to a proprietary framework, combining powerful visualization components and a data-driven approach to DOM manipulation.
Waarp provides a secure and efficient open source MFT (managed file transfer) solution.
The Waarp Platform is a set of applications and tools specialized in managing and monitoring large numbers of transfers in a secure and reliable way.
It relies on its own open protocol, R66, which has been designed to optimize file transfers, ensure the integrity of the data, and provide ways to integrate transfers into larger business transactions.
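Waarp implements its integrity guarantees inside the R66 protocol itself; the snippet below is only a generic sketch of the underlying idea, not Waarp's API. Streaming a file through SHA-256 lets sender and receiver compare digests after a transfer without loading the whole file into memory:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class TransferIntegrity {
    // Stream the file through SHA-256 so arbitrarily large transfers
    // can be checked with constant memory.
    static String sha256(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        return HexFormat.of().formatHex(md.digest()); // HexFormat requires Java 17+
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha256(Path.of(args[0])));
    }
}
```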
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.
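The "simple programming model" in question is MapReduce. The canonical word-count job, shown below in the form used by the Hadoop documentation, illustrates its shape: the mapper emits (word, 1) pairs, the reducer sums them, and the framework takes care of distribution, retries, and failure handling:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: split each input line into tokens and emit (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Compiled into a jar, a job like this would typically be launched with something along the lines of hadoop jar wc.jar WordCount <input dir> <output dir>, with the cluster deciding where each map and reduce task runs.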