AdMap: A Framework for Advertising using MapReduce Pipeline

Institute of Advanced Engineering and Science

Abhay Chaudhary, K R Batwada, Namita Mittal, Emmanuel S. Pilli

Computer Science and Information Technologies, Vol 3, No 1: March 2022

Abstract

The tremendous growth of digital marketing has produced a vast collection of consumer data. Regular notification data informs the choices that operators make about ads and advertising, so keeping consumer and service data up to date is essential. Consumers, in turn, are concerned with the volume of data when validating nearby services that have already been added to the dataset systems. A gap therefore forms between the producer and the client. To fill that gap, a framework is needed that can meet all the requirements for querying and updating the data. Prior work includes a MapReduce informal risk allocation review and secondary uncertainty system, as well as an advertisement and selling big data management services system. When the volume of data and user ads grows significantly, service time increases across a large advertisement network. MapReduce is a practical programming model for large-scale text and data analysis. Conventional MapReduce, however, has the drawback that the entire source dataset must be loaded into the database before evaluation can begin, which introduces sizeable latency when the data collection is immense. Existing systems have further shortcomings: building an application is complicated by the vast amount of information, which repeatedly leads to decision-tree-based approaches. A decision tree contains several layers, so it can become dynamic and overlapping, and its estimation complexity grows as the number of classes increases. Grouping matched data across different nodes or devices by clustering over a large amount of information takes time and often raises costs.
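To make the MapReduce model concrete, the sketch below counts ad-click events in the classic map/shuffle/reduce style. It is a minimal single-machine illustration of the programming model the abstract refers to, not the paper's actual pipeline; the record format and `ad_id` field are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical ad-event records; the paper's actual dataset is not specified.
records = [
    "ad_1 click", "ad_2 view", "ad_1 click", "ad_3 click", "ad_2 click",
]

def map_phase(record):
    """Map: emit an (ad_id, 1) pair for every click event."""
    ad_id, event = record.split()
    return [(ad_id, 1)] if event == "click" else []

def shuffle(pairs):
    """Shuffle: group intermediate pairs by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts per ad."""
    return {key: sum(values) for key, values in grouped.items()}

intermediate = [pair for rec in records for pair in map_phase(rec)]
click_counts = reduce_phase(shuffle(intermediate))
print(click_counts)  # {'ad_1': 2, 'ad_3': 1, 'ad_2': 1}
```

In a real Hadoop deployment the shuffle step is performed by the framework across nodes; here it is simulated in memory to show the data flow.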
A systematic solution for the automated incorporation of data into an HDFS (Hadoop Distributed File System) warehouse comprises a data hub server, a generic data-loading mechanism, and a metadata model, which together address the reliability of data loading, the heterogeneity of data sources, and the evolution of the data warehouse design. In our proposed framework, the database governs the data processing schema. As an increasing variety of data is archived, the data lake will play a critical role in managing it. To carry out a planned loading operation, the configuration files of the immense catalogue direct the data hub server to attach the miscellaneous details dynamically to its schemas.
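The metadata-driven loading idea above can be sketched as follows. A small metadata model declares each source feed, its target path, and its required fields; a generic loader validates records against that declaration before writing them out. The metadata fields, feed names, and the use of a local directory standing in for HDFS are all assumptions for illustration; the paper's real metadata model is not given.

```python
import json
import os
import tempfile

# Hypothetical metadata model: each entry describes one source feed and its
# target location plus required keys in the warehouse.
metadata = [
    {"source": "ads_feed.json", "target": "warehouse/ads", "keys": ["ad_id", "bid"]},
    {"source": "user_feed.json", "target": "warehouse/users", "keys": ["user_id"]},
]

def load_feed(entry, records, root):
    """Generic loader: keep only records that carry every key the metadata
    declares, then append them to the target path (a local directory used
    here as a stand-in for HDFS)."""
    target_dir = os.path.join(root, entry["target"])
    os.makedirs(target_dir, exist_ok=True)
    valid = [r for r in records if all(k in r for k in entry["keys"])]
    with open(os.path.join(target_dir, "part-0000.json"), "w") as f:
        for row in valid:
            f.write(json.dumps(row) + "\n")
    return len(valid)

root = tempfile.mkdtemp()
# Second record lacks the required "bid" field, so only one row is loaded.
loaded = load_feed(metadata[0], [{"ad_id": 1, "bid": 0.5}, {"ad_id": 2}], root)
print(loaded)  # 1
```

Because the loader consults only the metadata entry, adding a new heterogeneous source amounts to appending one dictionary to the catalogue rather than writing a new loading job, which is the evolution property the abstract describes.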

Keywords: MapReduce; Advertising; HDFS; Data Warehouse; Data Lake; Advertising and Publishing

Publisher: Institute of Advanced Engineering and Science

Publish Date: 2022-03-01

DOI: 10.11591/csit.v3i1.p%p

Publish Year: 2022
