Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The plots and statistics generated for the data analysis are given in this data set.
Furthermore, this data set contains the models, feature sets, scaler, prediction results and visualizations for the machine learning experiments conducted.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The associated article is still under review; we will link it to this dataset if it is accepted.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
ABSTRACT Meta-analysis is a statistical technique for combining results from different studies, and its use has been growing in the medical field. Knowing not only how to interpret a meta-analysis but also how to perform one is therefore fundamental today. The objective of this article is to present the basic concepts and to serve as a guide for conducting a meta-analysis using the R and RStudio software. To this end, the reader is given the basic commands in R and RStudio needed to conduct a meta-analysis. An advantage of R is that it is free software. For a better understanding of the commands, two examples are presented in a practical way, in addition to reviewing some basic concepts of this statistical technique. It is assumed that the data necessary for the meta-analysis have already been collected; that is, methodologies for systematic review are not discussed. Finally, it is worth remembering that there are many other techniques used in meta-analyses that were not addressed in this work. However, with the two examples provided, the article already enables the reader to carry out sound and robust meta-analyses. Level of Evidence V, Expert Opinion.
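The basic pooling step of a meta-analysis can be sketched as an inverse-variance fixed-effect model. This is a generic illustration in Python, not the R commands the article itself presents, and the three study effects and variances below are made-up values for demonstration only.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance fixed-effect pooling of study effect sizes.

    Returns the pooled effect, its standard error, and Cochran's Q
    heterogeneity statistic.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    return pooled, se, q

# Three hypothetical studies: effect estimates and their variances
effects = [0.30, 0.45, 0.25]
variances = [0.04, 0.09, 0.02]
pooled, se, q = fixed_effect_meta(effects, variances)
print(round(pooled, 3), round(se, 3), round(q, 3))
```

In R, the same pooling is typically obtained with dedicated packages such as meta or metafor, which additionally provide forest plots and random-effects models.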
Statistical Package for the Social Sciences (SPSS) is a menu-based software package for analysing statistical data and creating graphs to draw meaningful information and conclusions. SPSS is widely used in psychology, sociology, psychiatry, and other behavioural sciences research, and it is also user friendly.
The e-commerce reviews dataset is a vast collection of customer feedback from various online marketplaces, including Amazon, Taobao, Tmall, Suning, JD, and more. This dataset is an incredibly valuable resource that can help businesses understand customer behaviour, preferences, and product feedback. By analysing this dataset, companies can gain insights into their target audiences, identify trends, and make data-driven decisions to improve their products and services.
TagX, a technology and consulting company, helps businesses all over the world leverage the power of data to solve a wide range of challenges. With expertise in data management, data warehousing, data integration, and data annotation, TagX can help businesses turn vast amounts of data into meaningful insights. We work with companies to analyse e-commerce reviews and ratings data from various marketplaces and help them make data-driven decisions based on the insights gained.
TagX's data solutions can help businesses of all sizes, across various industries, make sense of complex datasets. For example, we can help businesses identify the products that are most popular among customers and analyse customer reviews to understand what features or aspects of the products they like or dislike. This information can then be used to improve products, enhance customer satisfaction, and drive sales.
TagX's capabilities can also help businesses forecast future trends and identify areas of growth. We use data analysis to identify patterns and make predictions, which can inform strategic decisions such as product development, marketing, and pricing.
In summary, the e-commerce reviews dataset is a rich resource that can give businesses insight into customer behaviour and product feedback. TagX's data solutions can help businesses analyse this data and turn it into meaningful insights that inform strategic decisions and improve products and services.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This resource contains a Jupyter Notebook that is used to introduce hydrologic data analysis and conservation laws. This resource is part of a HydroLearn Physical Hydrology learning module available at https://edx.hydrolearn.org/courses/course-v1:Utah_State_University+CEE6400+2019_Fall/about
In this activity, the student learns how to (1) calculate the residence time of water in land and rivers for the global hydrologic cycle; (2) quantify the relative and absolute uncertainties in components of the water balance; (3) navigate public websites and databases, extract key watershed attributes, and perform basic hydrologic data analysis for a watershed of interest; (4) assess, compare, and interpret hydrologic trends in the context of a specific watershed.
Please note that in problems 3-8, the user is asked to use an R package (i.e., dataRetrieval) and select a U.S. Geological Survey (USGS) streamflow gage to retrieve streamflow data and then apply the hydrological data analysis to the watershed of interest. We acknowledge that the material relies on USGS data that are only available within the U.S. If running for other watersheds of interest outside the U.S., or wishing to work with other datasets, the user must take some further steps and develop code to prepare the streamflow dataset. Once a streamflow time series dataset is obtained for an international catchment of interest, the user would need to read that file into the workspace before working through subsequent analyses.
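The residence-time calculation in learning objective (1) reduces to storage divided by throughput flux at steady state. A minimal sketch, using Python rather than the module's R materials; the storage and discharge figures below are round illustrative placeholders, since published global estimates vary by source.

```python
def residence_time_years(storage_km3, flux_km3_per_year):
    """Mean residence time = storage / throughput flux, assuming steady state."""
    return storage_km3 / flux_km3_per_year

# Illustrative round numbers for global river storage and discharge
# (placeholders, not the module's own data)
tau = residence_time_years(2000.0, 45000.0)
print(round(tau * 365.25, 1), "days")
```

The same ratio applies to any reservoir in the water balance (soil moisture, groundwater, atmosphere), provided storage and flux use consistent units.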
Analyzing a marketing campaign and predicting its success probability
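One common way to predict a campaign's success probability is logistic regression. The sketch below fits a single-feature model by batch gradient descent; the feature, labels, and data points are entirely hypothetical, and a real analysis would use the dataset's own fields and a proper library.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Single-feature logistic regression fit by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # prediction error for this record
            grad_w += err * x
            grad_b += err
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# Hypothetical campaign records: feature = number of prior contacts,
# label = 1 if the customer responded to the campaign
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 1, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
prob = sigmoid(w * 6 + b)  # predicted success probability at 6 prior contacts
print(prob > 0.5)
```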
https://s3-media0.fl.yelpcdn.com/assets/srv0/engineering_pages/bea5c1e92bf3/assets/vendor/yelp-dataset-agreement.pdf
Dataset containing millions of reviews on Yelp. In addition, it contains business data including location data, attributes, and categories.
Open Government Licence 3.0 http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
This dataset contains basic gridded atmospheric and surface variables for the planet Mars over three Martian years (a Martian year is 1.88 terrestrial years), produced as a reference run in association with the Mars Analysis Correction Data Assimilation (MACDA) v1.0 re-analysis. Each file in the dataset spans 30 Martian mean solar days (sols) during the science mapping phase of the National Aeronautics and Space Administration's (NASA) Mars Global Surveyor (MGS) spacecraft, between May 1999 and August 2004.
This dataset is a reference run produced by re-analysis of Thermal Emission Spectrometer (TES) retrievals of only total dust opacities, using the MACDA scheme in a Mars global circulation model (MGCM). This reference dataset, therefore, should be used in association with the full re-analysis of TES retrievals of nadir thermal profiles and total dust opacities - see linked dataset.
The MGCM used is the UK spectral version of the model developed by the Laboratoire de Météorologie Dynamique in Paris, France.
MACDA is a collaboration between the University of Oxford and The Open University in the UK.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Sensitivity analysis is a method to determine the effects that different parameter values and inputs have on simulation outputs. This process can be done before or after calibration (Ronald 2012). Calibration of parameter no. 4 used a population of 1500. A statistical analysis of 20 simulations tests assumption 1. The folder includes the Python code that rearranges and analyses the statistical data; a Wilcoxon rank-sum test analyses the consistency between the 20 datasets. Reference: Ronald, N. A. (2012). Modelling the effects of social networks on activity and travel behaviour. Eindhoven: Technische Universiteit Eindhoven. https://doi.org/10.6100/IR735524
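The Wilcoxon rank-sum comparison mentioned above can be sketched as follows. This is a generic normal-approximation implementation in Python, not the code included in the folder, and the two samples are made-up simulation outputs.

```python
import math

def wilcoxon_rank_sum(a, b):
    """Two-sided Wilcoxon rank-sum test using the normal approximation.

    Ties receive average ranks; returns the z score and two-sided p value.
    """
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0  # average rank of the tie group
        i = j
    w = sum(ranks[v] for v in a)  # rank sum of the first sample
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability
    return z, p

# Two hypothetical simulation output samples
a = [1.1, 2.3, 2.9, 3.8, 4.5]
b = [1.0, 1.4, 1.7, 2.1, 2.5]
z, p = wilcoxon_rank_sum(a, b)
print(round(z, 3), round(p, 3))
```

For small samples an exact test is preferable; libraries such as SciPy provide both variants.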
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Supplementary materials for the article: De Winter, J. C. F., Dodou, D., & Wieringa, P. A. (2009). Exploratory factor analysis with small sample sizes. Multivariate Behavioral Research, 44, 147–181.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Guanxi-based Network Analysis Dataset is a unique dataset created from the biographical information of the roughly 1,700 Central Committee members from 1922 to 2011. It also lists birth/death years, birth province, educational background, military experience, Party career assignments, government jobs, visits abroad, and any special remarks. Each Central Committee member's education level, military generation, provincial origin, kinship, and patron-client information was coded for each Central Committee-year. Individual attributes were also collected for each Central Committee member to indicate if he or she was a member of the Politburo or Standing Committee, a Long Marcher, previously purged and rehabilitated, a general officer, a technocrat, and/or a member of the Central Advisory Commission, as appropriate.
Attribution-NonCommercial 3.0 (CC BY-NC 3.0) https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
In the Excel file entitled Method for data collection, the original information for the sectoral-level material flow data can be obtained by following the statistical data sources presented in the data collection template. The definitions of the nine sectors (Internal Environment, Agriculture, Mining, Energy Conversion, Manufacturing, Recycling, Household, Construction, and Transportation) are described in Scopes of Sectors (Table S1). The templated spreadsheet (Table S2) was built to guide basic material flow data collection from statistical sources, and its empty last column lets users enter the material flow data for their own study area. The Inventory of Conversion Factors (Table S3) lists all the conversion factors used in sectoral material accounting at the city level. The Excel file Data for the 16 Shandong cities in 2017 demonstrates the process of material flow accounting. First, raw data were acquired according to the data sources specified in the template. Blank entries indicate data that cannot be obtained directly from statistical sources and must instead be estimated from the statistical data. In data preparation, some original data were recalculated or repeated so that their physical units could be unified by means of the conversion factors. For example, because the original statistics for the number of vehicles are recorded for a specific year rather than as a change between two years, simple calculations were performed. Because specific materials, such as steel and aluminum, are estimated from the added roads, pipelines, and heating devices, some repeated numbers appear in different terms for further estimation. Then, according to the conversion factors, data with inconsistent units were converted into the physical unit of tons, and the sectoral material flows are presented in the material flow accounting. Furthermore, the total amounts of material input, material recycling, and waste disposal can be used to evaluate the circular economy (CE) indicators of the cities.
In addition, following the pathways (e.g., f35 is the material flow from sector 3, Mining, to sector 5, Manufacturing), the material flow dataset can be used to construct the direct flow matrix, which is the basis for calculating the integral flows in the ecological network.
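Assembling the direct flow matrix from labelled pathways can be sketched as below, assuming the two-digit labels index the nine single-digit sectors as in the f35 example; the flow values are hypothetical.

```python
# Hypothetical flow records labelled as in the text: "f35" denotes the
# material flow from sector 3 (Mining) to sector 5 (Manufacturing), in tons.
flows = {"f35": 1200.0, "f57": 300.0, "f58": 450.0}

N_SECTORS = 9  # Internal Environment, Agriculture, Mining, Energy Conversion,
               # Manufacturing, Recycling, Household, Construction, Transportation

def direct_flow_matrix(flow_records, n=N_SECTORS):
    """Assemble the n x n direct flow matrix F, with F[i][j] = flow i -> j."""
    F = [[0.0] * n for _ in range(n)]
    for label, value in flow_records.items():
        # Label "f35" maps to zero-based row 2 (sector 3) and column 4 (sector 5)
        i, j = int(label[1]) - 1, int(label[2]) - 1
        F[i][j] = value
    return F

F = direct_flow_matrix(flows)
print(F[2][4])  # the Mining -> Manufacturing flow
```

From F, ecological network analysis typically normalizes each flow by sector throughflow before computing integral (direct plus indirect) flows.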
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ArcGIS has many analysis and geoprocessing tools that can help you solve real-world problems with your data. In some cases, you can run individual tools to complete an analysis. But sometimes you may require a more comprehensive way to create, share, and document your analysis workflow. In these situations, you can use a built-in application called ModelBuilder to create a workflow that you can reuse, modify, save, and share with others. In this course, you will learn the basics of working with ModelBuilder and creating models. Models contain many different elements, many of which you will learn about. You will also learn how to work with models that others create and share with you. Sharing models is one of the major advantages of working with ModelBuilder and models in general. You will learn how to prepare a model for sharing by setting various model parameters. After completing this course, you will be able to: identify model elements and states; describe a prebuilt model's processes and outputs; create and document models for site selection and network analysis; and define model parameters and prepare a model for sharing.
National Center for Veterans Analysis and Statistics (NCVAS) Web Site. The web site contains a collection of statistics, data, and reports about Veterans and the utilization of VA benefits and services.
A course on data analysis, in particular regression analysis. For secondary analysis, a survey called "De verg(r)uisde universiteit" is to be used, which is also stored in full at the Steinmetz Archive under number P0580. For further information, see study number P0580.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A diverse selection of 1000 empirical time series, along with results of an hctsa feature extraction, using v1.06 of hctsa and Matlab 2019b, computed on a server at The University of Sydney.
No description is available. Visit https://dataone.org/datasets/a349ba4d36c94075515cffb20796c71d for complete metadata about this dataset.
City of Providence Employees' Retirement System - Performance Analysis
Data on teaching behaviour / classroom climate / attitude to trying new methods of teaching / field experiment with pre- and post-measurements and control groups around a training course in interaction analysis in order to increase flexibility in teaching / observation of teachers by means of the verbal interaction category system (VICS) / measurement of classroom climate by means of the Minnesota attitude inventory (MPAI) / attitude to new methods of teaching measured with an instrument developed by Jansen and de Kuyper, 1973.