Sep 29, 2019

Review: Web Content Extractor’s Online/Cloud-based Scraping Platform

Scraping, or extracting data from a website, can be a daunting task if the site holds a lot of data. If you scrape a website without an automation tool for web scraping, like Web Content Extractor, you are likely to spend a great deal of time extracting the data. With an automation tool, you can finish the work sooner and with far less effort.

This tool certainly makes tedious, repetitive tasks easy, but it has another strong point: accuracy. If you extract data from an HTML website without an automated tool, errors are likely to creep in, most often through simple human error. An automated tool minimizes this: once you specify the type of data and set its fields, all matching data is scraped without the hassle of doing it manually, and the error rate drops accordingly. Read More…

May 08, 2019

Extracting Data From PDFs Using Tabula

Tabula Technology for PDF Parsing

Are you looking for a way to extract data from PDF documents? You’re in the right place. Manually retyping PDF data is often the first solution, but it fails most of the time for a variety of reasons. In this article we look at the PDF data extraction tool Tabula and how to use it. Tabula provides a visual interface for selecting which data fields to gather from PDF tables, conveniently and automatically. It is a free, open-source tool built for scraping data from PDF tables. Tabula works offline and is available under the MIT open-source license for Windows, Mac and Linux operating systems.
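Alongside the visual interface, the same extraction engine can be driven from Python through the `tabula-py` wrapper. The sketch below assumes `pip install tabula-py` plus a Java runtime, and the file names are hypothetical:

```python
def pdf_tables_to_csv(pdf_path, csv_path):
    """Extract every table Tabula detects in a PDF and write them to one CSV."""
    # Imported lazily so only this function needs tabula-py (and Java) installed.
    import tabula
    import pandas as pd

    # read_pdf returns a list of pandas DataFrames, one per detected table
    tables = tabula.read_pdf(pdf_path, pages="all", multiple_tables=True)
    pd.concat(tables, ignore_index=True).to_csv(csv_path, index=False)
    return len(tables)

# Example with a hypothetical input file:
# pdf_tables_to_csv("report.pdf", "report.csv")
```

This mirrors what the GUI does when you select a table region and export it, just in a repeatable, scriptable form.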

PDF Data Parsing

Data extraction from PDFs has many applications: Read More…

Mar 21, 2019

Talend Introduction & Tutorial: Merging Files With the Same Schema

What is ETL?

Extract, Transform, Load (ETL) is the process of extracting data from various data sources, organizing it, and storing it in a single database for later use, such as decision making and business insights. ETL used to be performed through hand-written SQL or .NET code, but today many ETL tools are available that simplify the process. ETL is generally used for data migration, data replication, operational processes, data transformation and data synchronization.

ETL Process


Extract is the first and most important step in the ETL process. Source data may be stored in many formats: raw text files, Excel or CSV files, an RDBMS database, or JSON or XML files. This step reads those different data sources and passes the data on to the next step, Transform.


This second step transforms the data into the required format. It includes operations such as joining, sorting, filtering, type conversion, lookups and validation, which together prepare the data for the next step.


In this last step the processed data is loaded into its final destination, which can be a raw file, an Excel or CSV file, or a database system such as MySQL, Access or PostgreSQL, among many other options.
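The three steps above can be sketched in a few lines of Python using only the standard library; the `merged` table name and the idea of CSV sources sharing one schema follow this tutorial's topic, but the details are illustrative:

```python
import csv
import sqlite3

def etl(csv_paths, db_path):
    """Merge CSV files that share one schema into a single SQLite table."""
    # Extract: read the rows from every source file
    header, rows = None, []
    for path in csv_paths:
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)
            header = header or file_header
            rows.extend(reader)
    # Transform: de-duplicate and sort the combined rows
    rows = sorted(set(map(tuple, rows)))
    # Load: write the cleaned rows into one database table
    with sqlite3.connect(db_path) as conn:
        conn.execute("DROP TABLE IF EXISTS merged")
        conn.execute(f"CREATE TABLE merged ({', '.join(header)})")
        conn.executemany(
            f"INSERT INTO merged VALUES ({', '.join('?' * len(header))})", rows
        )
    return len(rows)
```

A tool like Talend packages exactly this pipeline into visual components, plus error handling, scheduling and many more connectors.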

ETL Tools and Software

There are many ETL tools on the market, both commercial and open source, such as Informatica PowerCenter, IBM InfoSphere Information Server, Oracle Data Integrator, Microsoft SQL Server Integration Services (SSIS), Ab Initio, Sybase ETL and many more.

ETL plays a big role in the web scraping process. Data scraped from public websites and other sources is not always well formatted, and is sometimes messy. ETL tools like Talend help to transform the data into the required format, validate it, merge it, and load it into a database such as MySQL, a NoSQL store, SQLite or Oracle, or a storage target such as Amazon S3, FTP, Azure or Dropbox. Read More…

Dec 24, 2018

Web Scraping using Content Grabber API

API stands for Application Programming Interface: a software intermediary that allows two applications to communicate with each other. The API defines the correct way for a developer to manage a Content Grabber agent programmatically, much like a program API that uses Remote Procedure Call (RPC) to access a component.

The Content Grabber programming interface (API) provides access to the Content Grabber run-time from your own web or desktop applications. For example, if you want to access the results of a Content Grabber agent in your web application and display them on a dashboard, you can do so easily using the Content Grabber API. The run-time can be distributed with your applications royalty-free and does not require the Content Grabber application to be installed on the target computer. It requires .NET version 4.5 or higher.

Content Grabber API Read More…

Sep 15, 2018

Top 15 Automated Software Testing Tools

Software testing is an integral part of the software development life cycle. It helps find and fix bugs, which enhances software quality and the company’s reputation. Software testing is a huge subject, but it can be broadly categorized into two areas: manual testing and automated testing. In manual testing, as the name suggests, test cases are executed manually by a tester without the help of tools or software. In automated testing, test cases are executed with the help of tools, scripts and software. Automated testing requires less human effort and saves a substantial amount of time. There are a number of tools out there, some free and open source, some commercial, that help you test your system automatically. These tools are popularly known as automated testing tools or automated testing software.


In this article, I have curated a list of the top 15 most extensively used automated testing tools, listed below. Read More…

Jun 11, 2018

Google OpenRefine : Opensource and Free Tool to Work With Messy Data

In data-intensive fields like retail, banking, telecom and insurance, managing data without errors is a challenging task. Data cleaning is therefore vital for modifying or removing data in a database that is duplicated, incomplete, incorrect or poorly formatted. Every data wrangler wants to clean up raw data and transform it into other formats quickly. This practice is widely referred to as data wrangling, and sometimes as data munging or data cleansing.

Data quality is an important aspect of successful decision making. Inaccurate data leads to wrong assumptions and analysis, and consequently to the failure of a campaign or project. Redundant data causes problems of its own, such as slow load times, increased inconsistency and decreased efficiency. A good data cleaning tool solves these problems by clearing your database of redundant data, incorrect information and bad entries. Read More…

Apr 02, 2018

Content Grabber – Useful Custom Scripts

Content Grabber has powerful custom scripting with which you can customize Content Grabber’s behavior and develop powerful web scraping agents that can crawl and scrape data from simple to very complex websites.

Below are a few examples of custom C# scripts, showing a database connection and dynamic run-time XPath modification while the scraper is running. Read More…

Dec 28, 2017

Phone Number Validation & Lookup JSON API

NumVerify helps web and application developers validate national and international phone numbers. It provides a simple RESTful API for easy phone number lookup and validation, covering around 232 countries worldwide.
The number you pass is checked in real time, cross-verified against an up-to-date database of international phone and mobile numbers, and returned with as much information as possible. The API returns a JSON string containing information such as geographical location, carrier details, phone formats and phone type.

In short, NumVerify lets you find the details behind any phone number and helps you identify locally friendly number formats, reduce undelivered messages, and protect against spam and fraud.
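A minimal lookup sketch using only the Python standard library: the endpoint and parameter names follow NumVerify’s published documentation, but verify them against the current docs and substitute your own access key before relying on this.

```python
import json
import urllib.parse
import urllib.request

API_URL = "http://apilayer.net/api/validate"

def build_lookup_url(number, access_key):
    """Build the validation request URL for a phone number."""
    query = urllib.parse.urlencode({"access_key": access_key, "number": number})
    return f"{API_URL}?{query}"

def lookup(number, access_key):
    """Validate a number and return the decoded JSON response."""
    with urllib.request.urlopen(build_lookup_url(number, access_key)) as resp:
        # The JSON response carries fields such as valid, country_name,
        # location, carrier and line_type
        return json.load(resp)
```

For example, `lookup("14158586273", "YOUR_ACCESS_KEY")` would return the validation details for a US number.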

Read More…

Dec 09, 2017

Puppeteer – Web Scraping using Headless Chrome Node API

Puppeteer is a Node library, developed by the Google Chrome team, that provides an API for headless browsing and browser automation. A headless browser is a web browser without a graphical user interface (GUI): it has no visual components, and it lets you control web pages programmatically, without human intervention.
Browser automation helps you automate repetitive tasks and web application testing: for example, monitoring product pricing over a period of time, submitting forms, or automatically logging in to a web app, performing some task and logging out.

There are many libraries for browser automation and web scraping, such as PhantomJS and Selenium IDE. However, Puppeteer runs faster and uses less memory. Puppeteer only works with the Google Chrome browser. Puppeteer can be used for:

Read More…

Nov 27, 2017

Data Scraping Studio Review

Data Scraping Studio is an integrated, scalable platform built to power your data scraping projects. The aim behind the software is to provide an automatic, highly advanced data extraction engine so the user enjoys a fast data collection experience. Data Scraping Studio can extract data from web pages, AJAX sites, XML, JSON and many other sources. Key services include expert setup, maintenance, better performance, unlimited users, priority execution and expert support. For ease of understanding, Data Scraping Studio provides a Help Center including documentation, a forum, video tutorials and API documentation. Read More…
