Research Paper
Political economy
Somayeh Nematollahi; Farshad Momeni; Alireza Garshasbi
Abstract
The present study aimed to examine the effect of regulatory levels on industrial value-added growth, comparing the results between developed and developing countries. For this purpose, a nonlinear equation was estimated using the panel GMM method and the delta method for the period 2000–2019. The estimation results for a sample of 99 countries showed an inverted U-shaped relationship between regulatory variables and industrial growth. For approximately 67% of the observations, increases in the regulatory level had a positive and significant effect on industrial growth. In addition, the growth-maximizing regulatory level in the sample was estimated at 2.61 (on a scale of 0–10). Moreover, the findings made clear that the relationship between regulation and industrial growth in developed countries was fundamentally different from that in developing countries. Specifically, while the estimates for developing countries were consistent with those for the full sample and exhibited an inverted U-shaped pattern, no growth-maximizing regulatory level was observed for developed countries, which can be attributed to institutional differences between the two groups.
Introduction
A main area of government intervention in the economy is regulation aimed at promoting industrial development. The growing share of industry in the gross domestic product of industrializing countries highlights its special position in the world economy. Given the importance of government intervention and its capacity to play a regulatory role, a key question arises: to what extent have government regulatory institutions facilitated the process of industrialization, and to what extent have they created additional complexities for the industrial sector? Have the regulatory tools designed to support industrial policy contributed to industrial development, or have they instead led to industrial decline? Since government regulation has varying effects across countries, the present research is based on the hypothesis that an efficient level of regulation has a positive effect on industrial growth.
Materials and Methods
Relying on data from multiple countries, the empirical model used in this study examines the relationship between regulation and industrial growth, as presented in Equation (1):
$Growth_{i,t} = \alpha + \beta_1 Reg_{i,t} + \beta_2 Reg_{i,t}^2 + \gamma X_{i,t} + \alpha_i + \theta_t + \varepsilon_{i,t}$ (1)
In Equation (1), $Growth_{i,t}$ represents the annual industrial growth rate, which is also used in the Industrial Competitiveness Index and thus reflects aspects of industrial development quality. $Reg$ denotes the level of regulation, $X$ is a matrix of control variables, and $\alpha_i$ and $\theta_t$ are country and time fixed effects, respectively. Moreover, $i$ indexes countries and $t$ refers to time periods (2000–2019). Regarding the variables, international data from reputable institutions were selected to enable meaningful cross-country comparisons. Concerning the indicators, industrial value-added growth was taken from the UNIDO database. The regulation variable was derived from the Fraser Institute’s Economic Freedom Index, specifically from its subcomponents on credit market, labor market, and business (commercial) regulation. The Fraser dataset is the most widely used and internationally recognized measure of regulatory conditions. The Fraser Institute provides scores for more than 150 countries on a scale of 0 to 10, where higher values indicate less regulation. In the current study, the scale was reversed so that higher scores would reflect higher levels of regulation. Concerning the control variables, the economic freedom variable from the Fraser Institute and industrial value added per capita from UNIDO were employed. To test the hypothesis, the indicators were introduced and the growth-maximizing level of regulation was estimated through the delta method. The baseline specification of the model was then estimated using the GMM approach in a dynamic panel structure, covering 99 countries over the period 2000–2019.
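As a rough illustration of how the growth-maximizing regulation level and its uncertainty can be obtained, the sketch below fits the quadratic specification by pooled OLS with country and year dummies (a simplified stand-in for the authors' dynamic panel GMM) on synthetic data and applies the delta method to the turning point $-\beta_1/(2\beta_2)$. All variable names and the data-generating process are illustrative assumptions, not the study's data.

```python
# Minimal sketch: turning point of a quadratic growth-regulation specification
# and its delta-method standard error, on synthetic panel data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_years = 30, 20
df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_years),
    "year": np.tile(np.arange(2000, 2000 + n_years), n_countries),
})
df["reg"] = rng.uniform(0, 10, len(df))                     # regulation score, 0-10
df["reg_sq"] = df["reg"] ** 2
df["growth"] = 1.5 * df["reg"] - 0.3 * df["reg_sq"] + rng.normal(0, 2, len(df))

# Pooled OLS with country and year dummies as a simplified stand-in for panel GMM.
m = smf.ols("growth ~ reg + reg_sq + C(country) + C(year)", data=df).fit()
b1, b2 = m.params["reg"], m.params["reg_sq"]
V = m.cov_params().loc[["reg", "reg_sq"], ["reg", "reg_sq"]].to_numpy()

reg_star = -b1 / (2 * b2)                                   # growth-maximizing level
grad = np.array([-1 / (2 * b2), b1 / (2 * b2 ** 2)])        # gradient of -b1/(2*b2)
se_star = np.sqrt(grad @ V @ grad)                          # delta-method std. error
print(f"Reg* = {reg_star:.2f} (delta-method s.e. {se_star:.2f})")
```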
Results and Discussion
According to the results, increasing the level of regulation generally had a positive effect on the growth rate of industrial value added. The findings indicated that regulation exerted a positive impact on industrial growth at lower levels, but its effect became negative at higher levels. In other words, maximum industrial growth occurred when regulation was relatively low, and increases in regulation up to the growth-maximizing point could have positive and significant effects on industrial value-added growth. However, once regulation exceeded the growth-maximizing level, its influence became negative. This highlights the importance of identifying the appropriate degree of regulation for guiding government intervention in the economy. The results also underscored the dual nature of regulation. Its effect on industrial value-added growth is not linear; rather, it follows a nonlinear pattern in which both positive and negative effects are possible—depending on the level of regulation. Thus, the expectation that government regulation will have a uniform effect across different contexts is unrealistic. Moreover, the findings revealed that the relationship between regulation and industrial growth differs between developing and developed countries. The inverted U-shaped relationship between regulation and industrial value-added growth suggests that discussions on regulatory reform require greater attention. A country with excessive regulation that seeks to enhance industrial growth should consider continuing regulatory reforms, but only to the extent that such reforms do not reduce growth relative to its current value.
Conclusion
The results showed that the relationship between regulation and industrial value-added growth in developed countries differs fundamentally from that in developing countries. Specifically, while the findings for developing countries aligned with the results obtained from the full sample, the results for developed countries diverged significantly. In developed economies, the overall effect of regulation on industrial value-added growth is negative. This contrast can be attributed to differences in government capacity and the quality of market institutions. In countries with weaker institutions and lower administrative capacity, introducing certain regulatory mechanisms may actually be beneficial and can substitute for other missing capacities—a point emphasized by Mancur Olson in the theory of the market-augmenting government. It is also important to note that an over-regulated economy that attempts aggressive deregulation may encounter resistance from those who benefit from existing regulations and wish to preserve current rents. Therefore, the impact of regulation on industrial growth depends not only on the level of regulation itself but also on the extent of rent-seeking activity. When rent-seeking is pervasive, the diversion of resources away from productive activities in response to proposed regulatory changes reduces the likelihood that such reforms will successfully promote industrial growth.
Research Paper
Financial Economics
Reza Taleblou; Mir Ali Kamali; Parisa Mohajeri
Abstract
The current study employed a comparative analytical framework to examine credit-default prediction. It relied on a comprehensive dataset of 56,965 loan contracts issued between 2019 and 2024 across the northern branches of Bank Melli Iran. Three modeling approaches were evaluated: traditional logistic regression and two ensemble machine learning methods—random forest (RF) and extreme gradient boosting (XGBoost). The analysis incorporated 29 predictive features categorized into three conceptual groups: loan contract characteristics (e.g., principal amount, repayment tenure, collateral type), borrower attributes (e.g., age, occupational profile, credit history), and institutional factors (e.g., branch location, branch type). Data preprocessing included outlier removal, text categorization, and the extraction of variables such as age and grace period. The models were evaluated under both baseline and optimized (hyperparameter-tuned) settings. The results showed that the machine learning models substantially outperformed the conventional logistic regression model. XGBoost delivered the highest discriminatory power (ROC-AUC = 99.73%), followed closely by RF (99.68%), whereas logistic regression lagged significantly (75.34%). On average, the AUC difference between the machine learning models and logistic regression was approximately 0.243, and statistical tests with 95% confidence intervals confirmed the significance of this gap. Overall, the findings provided strong evidence for the superior reliability of machine learning approaches in forecasting loan default.
Introduction
Although traditional econometric models such as logistic regression have long served as the foundation of credit scoring systems, their reliance on linearity assumptions and error independence limits their ability to capture the complex, nonlinear patterns typical of financial data. These limitations are further compounded by sensitivity to multicollinearity and distributional assumptions that are frequently inconsistent with real-world conditions. The present research aimed to address these shortcomings by conducting a rigorous comparative analysis of predictive methodologies within Iran’s banking sector—a context in which machine learning applications remain relatively underutilized despite widespread global adoption of artificial intelligence in finance. Specifically, the study intended to compare the performance of two ensemble learning techniques (i.e., random forest and extreme gradient boosting, or XGBoost) with that of conventional logistic regression in forecasting loan defaults using extensive real-world data from Bank Melli Iran. The methodological advantages of machine learning approaches arise from their ability to model complex nonlinear relationships without requiring predefined functional forms, to automatically capture variable interactions through hierarchical partitioning, to maintain robustness in the presence of outliers and non-normal distributions, and to detect subtle patterns in high-dimensional data that escape parametric detection. By systematically evaluating these capabilities, the current study tried to offer empirical evidence to support financial institutions in adopting more advanced and reliable risk modeling frameworks.
Materials and Methods
The selection of predictive models in this study is informed by theoretical foundations, empirical literature, and practical forecasting capabilities. Three distinct modeling approaches—random forest (RF), extreme gradient boosting (XGBoost), and logistic regression (LR)—were employed to evaluate their effectiveness in predicting loan defaults. As a widely used ensemble learning algorithm, random forest builds multiple decision trees using bootstrap aggregating and random subsets of observations and features. Each tree is trained independently, and final predictions are obtained through majority voting (classification) or averaging (regression). This structure reduces overfitting and improves generalization compared to single decision trees. XGBoost is an advanced gradient boosting algorithm known for its efficiency and high predictive accuracy. XGBoost constructs trees sequentially, with each new tree reducing the residual errors of the ensemble through gradient descent optimization. Rooted in the logistic function and formalized in modern choice modeling, logistic regression improves on linear probability models by mapping predictions to the [0,1] interval via a sigmoid transformation. Although valued for its interpretability, conventional econometric models such as logistic regression suffer from a series of limitations, including linearity assumptions, limited interaction detection, multicollinearity sensitivity, and distributional constraints. These methodological constraints potentially compromise predictive performance in complex, nonlinear domains such as credit risk assessment.
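The sketch below shows, on synthetic data, how the three model families compared here could be fitted and scored by ROC-AUC; the features, class balance, and train/test split are placeholders rather than the paper's actual loan data or preprocessing.

```python
# Hedged sketch of the three model families on a synthetic, imbalanced
# stand-in for the loan data (about 10% defaults).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=20_000, n_features=29, weights=[0.9, 0.1],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=42)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=42),
    "XGBoost": XGBClassifier(n_estimators=300, learning_rate=0.05,
                             eval_metric="logloss", random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC-AUC = {auc:.4f}")
```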
Results and Discussion
The machine learning models were evaluated under two configurations: a baseline setting using default parameters and an optimized setting using hyperparameter tuning. Hyperparameters—settings external to the model that are not learned from data—strongly influence predictive accuracy, computational efficiency, and generalization. Suboptimal hyperparameter selection can lead to underfitting or overfitting, thereby compromising model performance. Common optimization strategies include grid search, random search, and Bayesian optimization. Empirical evidence shows that random search is often more efficient in high-dimensional spaces (Bergstra & Bengio, 2012). Although default parameters may yield reasonable baseline performance, they rarely yield optimal performance (Probst et al., 2019). Prior research suggests that systematic tuning can increase accuracy by 10–20% (Hutter et al., 2019) and improve generalization (Liao et al., 2018). In this study, hyperparameters were optimized to maximize the area under the curve (AUC), a standard practice in credit risk modeling (Feurer et al., 2015). This approach can reduce prediction errors and enhance model stability in ensemble methods. The empirical results revealed substantial performance improvements through hyperparameter optimization. For the RF model, accuracy increased from 96% in the untuned configuration to 99% after tuning, with a notable reduction in false negatives and improved precision, albeit with a slight decline in recall for the default class. The optimized XGBoost model—using 375 trees, a maximum depth of 12, and a learning rate of 0.03—achieved the lowest false-negative and false-positive rates, offering an optimal balance between learning capacity and predictive accuracy. In contrast, logistic regression showed limited discriminatory power, with a recall of 0.16 and a ROC-AUC of 0.75, indicating inherent limitations in capturing the complex patterns associated with default events.
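A minimal sketch of how such tuning could be carried out: a randomized search over XGBoost hyperparameters with ROC-AUC as the selection criterion, on the same kind of synthetic data as the previous sketch. The search space is illustrative and not the authors' exact procedure, although the tuned values they report (375 trees, depth 12, learning rate 0.03) fall inside it.

```python
# Hedged sketch: randomized hyperparameter search for XGBoost, selecting on AUC.
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold, train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=20_000, n_features=29, weights=[0.9, 0.1],
                           random_state=42)
X_tr, _, y_tr, _ = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

param_dist = {
    "n_estimators": randint(100, 600),
    "max_depth": randint(3, 15),
    "learning_rate": uniform(0.01, 0.2),
    "subsample": uniform(0.6, 0.4),
    "colsample_bytree": uniform(0.6, 0.4),
}
search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss", random_state=42),
    param_distributions=param_dist,
    n_iter=50,
    scoring="roc_auc",                       # tune for discriminatory power
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    n_jobs=-1,
    random_state=42,
)
search.fit(X_tr, y_tr)
print("Best CV AUC:", search.best_score_)
print("Best params:", search.best_params_)
```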
[Model-specific results reported separately (not reproduced here): Random Forest with and without hyperparameter tuning, XGBoost with and without hyperparameter tuning, and logistic regression. Source: Research Results]
Summary of Model Results
Model | State | Accuracy | Precision (Bad) | Precision (Good) | Recall (Bad) | Recall (Good) | F1-Score (Bad) | F1-Score (Good) | ROC-AUC
RF | Unoptimized | 97% | 0.94 | 0.98 | 0.83 | 0.99 | 0.88 | 0.98 | 0.935
RF | Optimized | 99% | 0.97 | 0.99 | 0.94 | 0.99 | 0.95 | 0.99 | 0.9968
XGBoost | Unoptimized | 98% | 0.96 | 0.99 | 0.85 | 0.99 | 0.90 | 0.99 | 0.9966
XGBoost | Optimized | 99% | 0.97 | 0.99 | 0.88 | 0.99 | 0.92 | 0.99 | 0.9973
LR | - | 96% | 0.90 | 0.96 | 0.16 | 0.98 | 0.27 | 0.98 | 0.7534
Source: Research Results
Conclusion
The empirical results of this study demonstrate the superior predictive capabilities of machine learning methods—particularly XGBoost—compared with conventional econometric approaches for estimating the probability of default (PD) in Bank Melli Iran’s loan portfolio. This performance gap primarily arises from machine learning algorithms’ ability to capture nonlinear relationships and latent structural patterns among default determinants—features that linear parametric models are unable to detect. Model precision was evaluated using several metrics, including confusion matrix analysis, total accuracy, and area under the ROC curve (AUC). The findings indicated that machine learning models deliver substantially higher predictive precision and improved default detection rates. The optimized XGBoost model achieved outstanding performance (accuracy = 99%, AUC = 0.9973), far surpassing the logistic regression model’s ability to identify default cases (recall = 0.16). This distinct performance disparity strongly supports the research hypothesis regarding the comparative advantage of machine learning in PD estimation. Despite their superior predictive performance, the operational deployment of advanced machine learning techniques in financial institutions remains constrained by two key challenges: the computational complexity of hyperparameter optimization and the interpretability limitations inherent in black-box models. These limitations highlight the practical importance of developing hybrid frameworks that integrate the interpretive transparency of traditional methods with the predictive power of machine learning approaches. This research provided evidence of a paradigm shift in credit risk analytics, moving away from the long-standing reliance on conventional statistical models (such as logistic regression and linear probability models) toward machine learning methodologies. While prior studies using traditional techniques achieved moderate success, their limitations in handling imbalanced distributions and complex interaction effects have become increasingly apparent. The present findings align with international research trends and offer novel empirical evidence from Iran’s banking sector—demonstrating that well-tuned machine learning algorithms can achieve unprecedented levels of accuracy (99% accuracy compared with a 16% default identification rate for logistic regression).
Research Paper
International Trade
Ali Mazyaki; Sina Ashouri; Javid Bahrami; Somayeh Shahhoseini
Abstract
The current study examined how strategic R&D investments by incumbent domestic firms influence entry deterrence against foreign competitors in an era of rising global market concentration. The objective was to analyze whether, and through what mechanisms, such investments operate as an endogenous barrier to free trade. To this end, the study developed a theoretical framework integrating insights from international trade and industrial organization. The interaction between incumbent firms and potential foreign entrants was modeled as a Stackelberg entry-deterrence game, from which the corresponding equilibrium conditions were drawn. The analysis showed that even in the absence of direct innovation incentives, incumbents may find strategic R&D investment optimal. This occurs through a distortionary increase in wages for R&D personnel, which raises foreign competitors’ entry costs and reduces their expected profits. The analysis also identified a range of market sizes in which entry-deterring behavior is most pronounced. Specifically, strategic R&D is most effectively used to deter entry in moderate-sized markets, whereas in very small or very large markets the incentives for such behavior weaken. These results indicate that trade liberalization alone is insufficient to curb rising market concentration. The study underscored the importance of integrating trade and industrial policy when analyzing competitive dynamics and provided a theoretical foundation for future empirical research on the relationship between market size and firm-level R&D.
Introduction
While canonical trade theory predicts that liberalization promotes competition, empirical evidence since the late 1970s shows a persistent rise in market concentration. In this context, it is essential to analyze how strategic recruitment of R&D personnel can serve as an endogenous barrier to entry in international markets. The current study aimed to examine whether incumbent domestic firms can use R&D hiring strategically to deter foreign entry, even in the absence of direct productivity gains. It also went on to identify the market conditions under which such behavior is profit-maximizing, and evaluate policy mechanisms capable of mitigating its anti-competitive effects. An attempt was made to infer policy implications from the results. The analysis employed a Stackelberg entry-deterrence framework that integrates insights from international trade and industrial organization. In this setting, incumbents first decide on production levels and the scale of R&D hiring, while potential entrants subsequently determine whether to enter the market after observing these choices. The model treats R&D labor as a scarce, wage-sensitive input and incorporates fixed entry costs to capture market-access frictions. Analytical solutions and comparative statics delineate the conditions under which deterrence is rational.
Materials and Methods
The study developed a compact, theory-driven approach characteristic of industrial-organization research. It constructed a three-stage Stackelberg game in which an incumbent domestic firm moved first, followed by a potential foreign entrant. The model explicitly distinguished between final-goods production and R&D activities, treating the domestic supply of R&D labor as both scarce and endogenous to wage setting. Instead of estimating structural parameters, the analysis proceeded analytically: equilibrium strategies were derived, comparative-static conditions were characterized, and the parameter regions in which deterrence was feasible were identified. To make the theoretical insights practically interpretable, the research provided illustrative numerical examples and mapped the feasible parameter domains. Finally, it examined cross-country indicators (e.g., market-size proxies and measures related to fixed trade costs and access to R&D talent from OECD and other international sources), demonstrating that the non-linear patterns predicted by the model were observable in available data, while emphasizing that these checks were illustrative rather than causal tests.
Results and Discussion
The analysis yielded four principal findings. First, incumbents can indeed profit from strategically recruiting R&D personnel to raise rivals’ entry costs. This mechanism operates not through direct productivity gains but via a labor-market distortion: by increasing demand for scarce R&D talent, incumbents drive up wages, thereby raising the resource costs a foreign entrant would face when attempting to replicate or adapt products for the domestic market. Second, the feasibility of this strategic hiring is strongly non-linear in market size. In very small markets, profitable entry deterrence is infeasible because limited demand makes the cost of such a strategy economically unjustified. In very large markets, incumbents have little incentive to deter entry, as they already extract substantial rents without engaging in costly wage escalation. It was found that strategic R&D recruitment is most viable in medium-sized markets, where incumbents possess both sufficient market power and exposure to potential entrants. Third, the prevalence of deterrence critically depends on fixed entry costs and other trade-cost parameters. When fixed costs are low, deterrence collapses and market opening tends to produce competitive reallocation. Conversely, high or sticky fixed costs expand the parameter domain in which strategic hiring can sustain exclusion. Fourth, the analysis identified suggestive empirical patterns consistent with these theoretical predictions. Cross-country and cross-industry proxies revealed a non-monotonic relationship between market size and measures of R&D hiring intensity and entry impedance, while higher trade-friction indicators corresponded to conditions favorable to deterrence. Overall, these findings are relevant to the broader trade–innovation literature: although classic reallocation channels operated in some environments, the model demonstrated how innovation-related labor-market mechanisms can weaken or even reverse the pro-competitive effects of liberalization.
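The non-monotonic pattern in market size can be reproduced in a stylized numerical example that is not the authors' model: a textbook Stackelberg setting with linear inverse demand P = a - Q (where a proxies market size), a common marginal cost, an entrant fixed cost, and an R&D-hiring outlay that raises the entrant's effective entry cost through wage escalation. All parameter values below are assumptions chosen for illustration.

```python
# Stylized sketch of the entry-deterrence logic, NOT the paper's model.
# Linear inverse demand P = a - Q, common marginal cost c, entrant fixed cost F,
# and an R&D-hiring outlay s that raises the entrant's entry cost to F + theta*s.
c, F, theta = 1.0, 1.0, 0.35        # illustrative parameter assumptions

def incumbent_choice(a):
    m = max(a - c, 0.0) ** 2        # demand-size term (a - c)^2
    follower_pi = m / 16 - F        # entrant's profit facing a monopoly-output leader
    if follower_pi <= 0:
        return "entry unprofitable without any strategic spending"
    s_min = follower_pi / theta     # smallest outlay that makes entry unprofitable
    pi_deter = m / 4 - s_min        # monopoly profit net of the R&D-hiring cost
    pi_accom = m / 8                # Stackelberg-leader profit if entry occurs
    return "strategic deterrence" if pi_deter > pi_accom else "accommodate entry"

for a in [2, 4, 6, 8, 12, 20, 40]:
    print(f"market size a = {a:>3}: {incumbent_choice(a)}")
```

With these illustrative parameters, small markets leave entry unattractive without any strategic spending, intermediate markets make costly deterrence the incumbent's best reply, and large markets make accommodation more profitable than wage escalation, which mirrors the intermediate-market-size window described above.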
Conclusion
This study developed a theoretical framework to understand how the strategic recruitment of R&D personnel can serve as an endogenous barrier to foreign market entry, thereby explaining cases in which trade liberalization coincides with increased market concentration. The main policy implications are twofold. First, trade liberalization alone does not necessarily enhance competition if the strategic labor-market effects related to innovation are ignored. Second, policies that lower the effective fixed costs of entry or increase access to R&D talent (e.g., reducing regulatory barriers, improving talent mobility, or promoting open R&D collaborations) can mitigate the anti-competitive incentives for firms to use R&D hiring as an exclusionary tactic. In addition, more targeted competition policies are recommended. These include monitoring wage-driven exclusionary strategies, scrutinizing hiring practices that aim to limit talent rather than enhance productive capacity, and conditioning subsidies or incentives on demonstrable productivity gains. Several empirical extensions are also recommended, such as structurally estimating the model using firm-level data, identifying the causal effects of wage distortions, and evaluating policy experiments designed to reduce fixed trade costs or improve access to R&D personnel. Such efforts are critical for translating the model’s theoretical insights into practical policy solutions.
Research Paper
Monetary Economics
Elmira Asle Roosta; Alireza Erfani; Abdolmohammad Kashian
Abstract
The exchange rate is recognized as a key economic indicator influenced by multiple factors. Some of these factors manifest as measurable economic variables, while others are reflected in political and financial news. A central, unresolved question is whether it is possible to develop a comprehensive and scalable model for exchange rate modeling and forecasting that accounts for all relevant variables and factors. Using a data fusion approach, the present study proposed a comprehensive deep learning–based model supporting multiple data types. To train the model, exchange rate–related news was collected from ten major national and international sources covering the period from 2014 to 2023 (1393–1402 in the Iranian calendar). The data was then combined with exchange rate figures and other economic indicators. To identify the best model, eight machine learning models, two statistical models, and one large language model were trained and evaluated under both regression and classification settings. To mitigate bias and random effects, the study applied time series–aware cross-validation along with repeated training and testing using different random initializations. The results demonstrated that the proposed approach, which directly incorporates all influential factors, significantly outperforms existing methods.
Introduction
Exchange rate fluctuations represent one of the most complex challenges in modern economic analysis, shaped by a dynamic interplay of macroeconomic fundamentals, policy decisions, and informational signals disseminated through the media. Traditional econometric approaches often fail to capture these multidimensional interactions, as they rely primarily on quantitative variables and lagged historical data. As a result, they tend to overlook the qualitative influence of news, market sentiment, and expectations that often precede measurable economic changes. Recent advances in artificial intelligence and machine learning have introduced powerful tools for integrating diverse forms of data—both numerical and textual—into unified predictive systems. The present research tried to propose a comprehensive and extensible model for forecasting exchange rates in Iran, combining structured economic indicators with unstructured news data through a data fusion approach.
Materials and Methods
This study employed a quantitative and applied methodology based on supervised machine learning techniques. The dataset spans the period from April 2014 to March 2023 (1393–1402 in the Iranian calendar). Daily free-market exchange rates were obtained from three verified sources: the National Exchange website, the Gold and Currency Information Network, and the Bonbast platform. Additionally, key macroeconomic indicators—including GDP growth, inflation rate, unemployment rate, trade balance, public debt, foreign reserves, and oil prices—were collected from official statistical repositories. Then the study went on to incorporate qualitative dimensions. In this respect, news articles related to exchange rate dynamics were gathered from ten major national and international media outlets, including Donya-e-Eqtesad, San’at-Madan-Tijarat, Asia Daily, ISNA, Khabaronline, Tabnak, BBC Persian, and Voice of America Persian. Each news item was labeled according to the contemporaneous changes in exchange rates. Data preprocessing involved normalization, outlier removal, and interpolation of missing values for numerical data. Textual data underwent cleaning, tokenization, and embedding using the ParsBERT model (Farahani et al., 2021), which was fine-tuned on domain-specific economic texts to improve contextual representation. Following preprocessing, approximately 388,354 fused samples were constructed. Eight machine learning models (Random Forest, XGBoost, LightGBM, CNN-LSTM, GRU, Bi-GRU, LSTM, and Bi-LSTM), two statistical models (ARIMA and Prophet), and one large language model (GPT-4) were trained and compared under both regression and classification settings. Model evaluation was conducted through time-series–aware cross-validation and repeated random initialization to minimize bias. Performance metrics included Mean Absolute Error (MAE), Mean Squared Error (MSE), Accuracy, and F1-score.
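As a rough sketch of the fusion step, the code below concatenates precomputed news embeddings (e.g., ParsBERT-style vectors) with scaled numeric indicators, builds rolling windows, and trains a bidirectional GRU classifier in Keras under a chronological split. The dimensions, window length, labels, and training settings are placeholders, not the paper's configuration.

```python
# Hedged sketch of text-numeric data fusion feeding a Bi-GRU classifier.
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_days, window, emb_dim, n_macro = 2000, 30, 768, 8
news_emb = rng.normal(size=(n_days, emb_dim))      # daily news embeddings (placeholder)
macro = rng.normal(size=(n_days, n_macro))         # daily/interpolated indicators (placeholder)
labels = rng.integers(0, 2, size=n_days)           # 1 = rate rose, 0 = rate fell (placeholder)

fused = np.concatenate(
    [StandardScaler().fit_transform(macro), news_emb], axis=1
).astype("float32")

# Rolling windows: predict day t's direction from the previous `window` days.
X = np.stack([fused[t - window:t] for t in range(window, n_days)])
y = labels[window:].astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, fused.shape[1])),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

split = int(0.8 * len(X))                          # chronological split, no shuffling
model.fit(X[:split], y[:split], validation_data=(X[split:], y[split:]),
          epochs=5, batch_size=64)
```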
Results and Discussion
The results revealed that models integrating textual and numerical data substantially outperform those trained solely on numerical inputs. Specifically, the inclusion of news embeddings reduced forecasting error by more than 5% across most deep learning architectures. Among the evaluated models, the fine-tuned GPT-4 achieved the highest overall accuracy and the lowest error metrics in both regression and classification tasks. However, considering constraints on interpretability and data security, the Bi-directional Gated Recurrent Unit (Bi-GRU) model was identified as the optimal choice for practical implementation. The Bi-GRU model exhibited strong learning capability in capturing temporal dependencies and contextual relationships between macroeconomic variables and market sentiment. In classification mode, it achieved an F1-score of 0.84 and an accuracy rate of 0.86 when textual data were incorporated. In contrast, traditional statistical models such as ARIMA and Prophet showed limited capacity to reflect short-term market shocks influenced by real-time news.
The findings highlighted the importance of data fusion in financial forecasting. Textual news data provide early signals of market sentiment that often precede observable changes in economic variables. By integrating these heterogeneous data sources, the proposed model can offer a more dynamic and responsive forecasting framework, particularly suited to volatile markets such as Iran’s foreign exchange sector.
Conclusion
This study proposed a comprehensive machine learning–based model that successfully integrates textual and numerical data for exchange rate forecasting in Iran. The results confirmed that data fusion enhances predictive accuracy and robustness, outperforming both conventional econometric methods and single-modality deep learning models. Among the evaluated architectures, Bi-GRU offered the most practical balance between performance, interpretability, and computational efficiency. The findings underscored that incorporating news-driven sentiment and contextual information provides a timely advantage for policy formulation and risk management. Moreover, the modular structure of the proposed model allows for future extensions to other economic domains such as stock market analysis and inflation forecasting. Future studies are recommended to expand the dataset to include social media sentiment and to adopt explainable AI (XAI) techniques to improve interpretability and transparency.
Research Paper
Financial Economics
Azam Ahmadyan
Abstract
Today, the significance of the presence and emergence of fintechs—particularly those active in the financial sector—is widely recognized. This importance is reflected in the growing number of recent studies that examine fintech performance and its relationship with macroeconomic indicators at the international level. Global experiences indicate that fintech has substantially contributed to promoting economic growth and controlling inflation by expanding access to financial services. In Iran, more than 50 active fintech companies have entered various areas of the banking business model, suggesting that their presence can influence macroeconomic performance. Enhancing economic growth has been a primary concern for policymakers in recent years, which raises several key questions. First, how does the emergence and presence of fintech impact economic growth? Second, does the impact of fintech on economic growth vary under different conditions of inflation, liquidity, exchange rates, and stock price index? Using time series data from 1991 to 2023, the present study aimed to examine these questions. The ARDL method was employed to assess both the short-term and long-term effects of fintech on economic growth. Furthermore, the threshold regression method was applied to examine the impact of fintech under different levels of inflation, liquidity, exchange rates, and stock price index. The results of the autoregressive method with a structural break indicated a positive effect of fintech on economic growth. The threshold regression results further revealed that the effect of fintech on economic growth would vary across different macroeconomic variables.
Introduction
The development of the financial sector is a key driver of economic growth and GDP expansion. Rapid digitalization and technological advancements—such as digital currencies, artificial intelligence, mobile payments, and online trading—have profoundly transformed the financial system. These innovations, collectively known as financial technology (fintech), not only reshape financial operations but also create new opportunities for investors. According to the Financial Stability Board (2021), fintech refers to technological innovations in financial services that have the potential to alter business models, products, and market structures. Fintech enhances financial inclusion, lowers transaction and transfer costs, improves income flows, and supports investment and productivity. Through these channels, it influences consumption, savings, employment, and wealth creation, thereby fostering economic growth. Building on the endogenous growth model, the present study aimed to analyze the impact of fintechs on economic growth using data-driven approaches inspired by Narayan (2019) and Mshamba and Gani (2023). Previous empirical research indicates both positive long-term and negative short-term effects, with some evidence suggesting a U-shaped relationship—initially negative but turning positive as fintech matures. To examine this behavior, four hypotheses were explored, each focusing on the role of macroeconomic thresholds: inflation, exchange rates, liquidity, and stock price index.
Materials and Methods
The study relied on annual macroeconomic and fintech data for the period 1991–2023 (1370–1402 S.H.) to examine the dynamics of fintech’s impact on economic growth within an endogenous growth framework. The number of active fintech firms served as a proxy for fintech activity, in line with World Bank (2022) measurements, which highlight firm creation as the most accessible global indicator. Two complementary econometric approaches were employed. First, the autoregressive distributed lag (ARDL) model was used to analyze short-term and long-term dynamics. Second, the threshold regression (TR) model was applied to capture nonlinear effects under varying macroeconomic regimes. Specifically, four threshold variables (i.e., inflation, exchange rates, liquidity, and stock price index) were considered to examine whether the impact of fintech on growth varies across these conditions. Economic growth was the dependent variable, while sectoral credit ratios and investment rates were considered as control variables. By combining these models, the study provided a comprehensive assessment of fintech’s impact on economic growth and its interaction with macroeconomic fluctuations.
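A minimal sketch of the two estimation steps on synthetic annual data: an ARDL(1,1) model estimated with statsmodels, and a single-threshold regression located by grid search over candidate inflation thresholds. Series names, lag orders, and the threshold grid are illustrative assumptions, not the study's specification.

```python
# Hedged sketch: ARDL dynamics plus a grid-search threshold regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.ardl import ARDL

rng = np.random.default_rng(1)
T = 33                                                     # annual data, 1991-2023
df = pd.DataFrame({
    "growth": rng.normal(3, 2, T),
    "fintech": np.cumsum(rng.poisson(2, T)).astype(float),  # number of active firms (proxy)
    "inflation": rng.uniform(5, 45, T),
})

# (i) ARDL(1,1): short- and long-run dynamics of growth with respect to fintech.
ardl = ARDL(df["growth"], lags=1, exog=df[["fintech"]], order=1).fit()
print(ardl.summary())

# (ii) Threshold regression: split the sample at each candidate inflation level
# and keep the threshold that minimizes the pooled sum of squared residuals.
def ssr_at(tau):
    ssr = 0.0
    for mask in (df["inflation"] <= tau, df["inflation"] > tau):
        X = sm.add_constant(df.loc[mask, "fintech"])
        ssr += sm.OLS(df.loc[mask, "growth"], X).fit().ssr
    return ssr

candidates = np.quantile(df["inflation"], np.linspace(0.15, 0.85, 30))
tau_star = min(candidates, key=ssr_at)
print(f"Estimated inflation threshold: {tau_star:.1f}")
```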
Results and Discussion
The results demonstrate that fintech plays an increasingly significant role in the modern economy, operating alongside traditional banks and influencing key macroeconomic variables—particularly economic growth. Empirical evidence confirms that the relationship between fintech development and growth is nonlinear, with effects that can be either positive or negative depending on prevailing macroeconomic conditions. Using the ARDL approach, the study confirmed a well-fitted model in which fintech exerts an overall positive effect on economic growth. However, analysis of macroeconomic thresholds—inflation, exchange rates, liquidity, and stock price index—revealed important nonlinear patterns. When inflation or exchange rates exceed critical thresholds, fintech’s impact on growth becomes negative, primarily due to rising investment, increased operational costs in technology infrastructure, and reduced financial accessibility. Similarly, excessive liquidity can heighten inflationary pressures, thus undermining fintech’s positive effects. In contrast, across varying levels of the stock price index, fintech consistently demonstrates a positive effect, reflecting its role in enhancing market efficiency and investor confidence.
Conclusion
According to the findings, fintech’s impact on economic growth is contingent upon macroeconomic stability. This study addressed a significant empirical gap by highlighting these threshold-dependent effects. It concludes that future research should further investigate the nonlinear (potentially U-shaped) relationship between fintech and economic growth, particularly through interaction terms that link fintech with macroeconomic variables.
Research Paper
Welfare, poverty and income distribution
Ali Azin; Seyed Hadi Arabi; Mohammad Hasan Maleki
Abstract
The current study aimed to identify and analyze the factors influencing subjective poverty in Iran. As applied research, it employed a mixed methods design to address the topic. The theoretical population consisted of experts in the field of subjective poverty, and a judgmental sampling method was used. The data was collected through two questionnaires: an expert evaluation questionnaire and a prioritization questionnaire. In the first step, 36 factors were identified through a literature review and expert interviews. These factors were then screened using a questionnaire and the fuzzy Delphi method. Thirteen factors with a defuzzified value greater than 0.65 were selected for final prioritization. The selected factors were ranked using the measurement of alternatives and ranking according to the compromise solution (MARCOS) method and the prioritization questionnaires. According to the results of the MARCOS method, the main factors influencing subjective poverty in Iran include personality traits, economic inequality, online social networks, social anomie, and unemployment. The findings suggest that personality traits such as self-esteem and hope for the future, along with factors like economic inequality and active engagement in online social networks, contribute significantly to subjective poverty in society.
Introduction
Poverty eradication is one of the most fundamental responsibilities of governments. Poverty is a multidimensional phenomenon that cannot be attributed solely to economic factors, nor are its consequences purely economic. It is thus essential to carry out a deeper examination of poverty. Poverty can generally be classified into three types: absolute poverty, relative poverty, and subjective poverty. Subjective poverty refers to one’s internal perception and personal evaluation of their living conditions. A person experiencing subjective poverty may not necessarily suffer from absolute or relative poverty; they may have an adequate income yet still feel impoverished. In this respect, this study aimed to identify and analyze the most significant factors affecting subjective poverty in Iran.
Materials and Methods
As an applied inquiry, the present study relied on a post-positivist approach and a mixed methods research design. The orientation was predominantly quantitative in nature. Moreover, a survey method, as well as library research, was used to collect the data. The factors influencing subjective poverty were first identified through a review of existing literature. Since certain factors specific to the Iranian context were not addressed in foreign studies, additional factors were identified through structured interviews with experts. The first questionnaire was an expert evaluation questionnaire designed to screen the factors influencing subjective poverty. For this purpose, the factor identification questionnaire was distributed to 15 experts in the field of poverty, selected through purposive sampling. These experts were faculty members from economics departments across Iran. In total, 36 factors were identified and categorized into four groups: economic, political, social, and individual factors. These factors were then screened using the fuzzy Delphi method, resulting in the selection of 13 key factors by experts. In the final stage, the measurement of alternatives and ranking according to the compromise solution (MARCOS) method was employed to prioritize the 13 identified factors.
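For reference, the sketch below implements the MARCOS ranking step: an extended decision matrix with ideal and anti-ideal solutions, linear normalization, utility degrees, and the final utility function f(K). The decision matrix, criteria, and weights shown are illustrative, not the experts' actual responses, and the factor names are taken from the study's reported top results only for labeling purposes.

```python
# Hedged sketch of the MARCOS multi-criteria ranking procedure.
import numpy as np

def marcos(X, weights, benefit):
    """X: alternatives x criteria matrix; benefit[j] = True for benefit criteria."""
    ai = np.where(benefit, X.max(axis=0), X.min(axis=0))    # ideal solution
    aai = np.where(benefit, X.min(axis=0), X.max(axis=0))   # anti-ideal solution
    ext = np.vstack([aai, X, ai])                           # extended decision matrix
    norm = np.where(benefit, ext / ai, ai / ext)            # linear normalization
    S = (norm * weights).sum(axis=1)                        # weighted sums
    s_aai, s_alt, s_ai = S[0], S[1:-1], S[-1]
    k_plus, k_minus = s_alt / s_ai, s_alt / s_aai           # utility degrees
    f_plus = k_minus / (k_plus + k_minus)
    f_minus = k_plus / (k_plus + k_minus)
    return (k_plus + k_minus) / (1 + (1 - f_plus) / f_plus + (1 - f_minus) / f_minus)

# Illustrative scores for 5 of the 13 screened factors on 3 benefit criteria.
factors = ["personality traits", "economic inequality", "online social networks",
           "social anomie", "unemployment"]
X = np.array([[9, 8, 9], [8, 9, 7], [8, 7, 8], [7, 7, 7], [7, 8, 6]], float)
scores = marcos(X, weights=np.array([0.4, 0.35, 0.25]), benefit=np.array([True] * 3))
for factor, s in sorted(zip(factors, scores), key=lambda t: -t[1]):
    print(f"{factor}: f(K) = {s:.3f}")
```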
Results and Discussion
According to the results, personality factors were found to play a decisive role in shaping subjective poverty, encompassing individual characteristics such as contentment, envy, greed, self-confidence, motivation, hope for the future, and the ability to cope with challenges. Economic inequality emerged as the second most influential factor, indicating that disparities in income and wealth within society can heighten feelings of despair and insecurity among individuals—even when their material living conditions are relatively favorable. Active engagement in online social networks was identified as the third most significant factor. This suggests that excessive use of social media can intensify perceptions of poverty due to the constant comparison of one’s own life with the appearance of other lives. Social anomie, ranked fourth, refers to a lack of identity and belonging in society. Individuals who feel disconnected from their community, country, political or social groups, or even their family, tend to experience subjective poverty more acutely than others. Finally, unemployment, while commonly associated with objective poverty due to insufficient income, contributes to subjective poverty through the loss of self-worth and social value. Beyond financial hardship, unemployment fosters feelings of exclusion and diminished social participation, leading individuals to perceive themselves as poorer compared to their employed counterparts.
Conclusion
The final remarks and practical recommendations are as follows. Regarding personality factors, it is recommended to design a national program for enhancing psycho-economic resilience in cooperation with the Ministry of Education and the Ministry of Economic Affairs and Finance. This initiative should be aimed at increasing the psychological resilience of students. Additionally, an educational intervention should be developed in collaboration with the Psychology and Counseling Organization and the Ministry of Education to strengthen self-esteem, resilience, self-confidence, and hope for the future among the current generation of students. Furthermore, the Statistical Centre of Iran should develop national indices to measure the rate of subjective poverty across the country, with separate analyses for each province to guide targeted reduction strategies. In terms of economic inequality, reforms are needed to create a more equitable distribution of wealth. Under the supervision of the Ministry of Economic Affairs and Finance, the tax system should be revised to focus on wealth, capital gains, and vacant housing units, thereby reducing wealth concentration and lowering the Gini coefficient. Wage levels should be standardized across all executive, governmental, and public agencies, particularly for organizations such as the Ministry of Petroleum and its subsidiaries, and the National Iranian Copper Industries Company—which currently pay wages several times higher than other institutions like the Ministry of Education. In addition, ensuring transparency and equity in the allocation of bank loans is essential to reduce disparities between the general public and privileged groups, thereby diminishing the sense of inequality. Concerning online social networks, it is recommended to develop a national media literacy framework led by the Ministry of Communications and Information Technology, the Ministry of Culture and Islamic Guidance, and the Ministry of Education, aimed at providing comprehensive media literacy education to students and the public. Supporting the development of local digital platforms can help reduce social comparison, while monitoring the psychological effects of social networks through the Ministry of Communications and Information Technology and the Psychology and Counseling Organization—with the publication of an annual report on digital mental health—can further mitigate the impact of social media on subjective poverty.
Research Paper
Financial Economics
Gholamhossein Golarzi; Mahnaz Khorasani
Abstract
This research examined the asymmetric effects of domestic economic policy uncertainty (DEPU) and global economic policy uncertainty (GEPU) on stock market index returns in Iran. The study focused on simultaneous analysis of economic policy uncertainty originating from both domestic and global sources within a nonlinear framework, as well as the stock market’s asymmetric responses to these uncertainties. It used the nonlinear autoregressive distributed lag (NARDL) model, as it enables dynamic analysis and distinguishes the market’s reactions to positive and negative shocks across different time horizons. The dataset consisted of quarterly observations from 1997 to 2024. In addition to the uncertainty indices, the model incorporated several control variables, including the exchange rate, global oil prices, the consumer price index, money supply, real non-oil GDP, and stock market liquidity. Before estimating the model, the statistical properties of the data—such as nonlinearity, stationarity, the presence of long-term relationships, and response symmetry—were examined to ensure the suitability of the NARDL approach and the validity of the results. The results indicated that positive and negative shocks to DEPU have significant positive and negative effects, respectively, on stock market index returns in both the short and long run. Furthermore, GEPU shocks exert significant short-term effects with a time lag: positive shocks increase, while negative shocks decrease stock market index returns. In the long term, however, only positive GEPU shocks have a significant positive impact. The control variables also exhibited significant effects on stock market index returns.
Introduction
Economic policy uncertainty (EPU) is widely recognized as a critical factor influencing financial markets, including stock market returns. In today’s interconnected global economy, both domestic and global sources of policy uncertainty play a pivotal role in shaping investor behavior, economic decision-making, and overall market stability. Given Iran’s repeated exposure to policy shifts, economic sanctions, and geopolitical tensions, the country presents a unique setting for analyzing the impacts of policy uncertainty. Uncertainties—whether originating domestically or globally—can affect the stock market in diverse ways, varying in timing, direction, and intensity. This study aimed to investigate the asymmetric effects of domestic economic policy uncertainty (DEPU) and global economic policy uncertainty (GEPU) on stock market returns in Iran over the period 1997–2024. The primary objective was to examine how different forms of policy uncertainty influence the behavior of the Iranian stock market, while accounting for the non-linear and dynamic nature of these relationships. Market responses are not only asymmetric but are also shaped by the specific nature of each uncertainty source and the market’s sensitivity to these factors. Therefore, a nuanced analytical approach is required to capture the interactions between policy uncertainty and stock market performance. To this end, the present study employed the nonlinear autoregressive distributed lag (NARDL) model, an advanced econometric technique designed to capture asymmetric responses to positive and negative shocks in EPU. The findings can provide valuable insights into the role of EPU in shaping stock market returns in an emerging market such as Iran.
Materials and Methods
The present study used the nonlinear autoregressive distributed lag (NARDL) model, an appropriate method for analyzing nonlinear relationships in economic time series data. The NARDL model allows for the differentiation between positive and negative shocks, offering a more nuanced understanding of how various forms of uncertainty impact market behavior. Unlike traditional linear models, which assume symmetric effects of shocks, the NARDL approach enables the examination of distinct effects arising from positive and negative policy uncertainty shocks on stock market returns. This asymmetry is central to the study, as it reveals how market responses vary depending on the intensity and direction of uncertainty—an essential aspect for comprehensively assessing the effects of EPU on stock market performance. The analysis used quarterly time series data spanning the period 1997 to 2024. Key variables included DEPU and GEPU, Iran’s stock market returns, the exchange rate, global oil prices, the consumer price index (CPI), money supply, real non-oil GDP, and stock market liquidity. The DEPU and GEPU indices were constructed using content analysis of news reports, a widely accepted method for measuring EPU. Based on the theoretical framework and following the model proposed by Shin et al. (2014), the nonlinear long-term specification for Iran’s stock market index returns is presented in Equation (1):
$LSP_t = \beta_0 + \beta_1^{+}LDEPU_t^{+} + \beta_1^{-}LDEPU_t^{-} + \beta_2^{+}LGEPU_t^{+} + \beta_2^{-}LGEPU_t^{-} + \beta_3 LEX_t + \beta_4 LOIL_t + \beta_5 LCPI_t + \beta_6 LMS_t + \beta_7 LRNOGDP_t + \beta_8 LLIQ_t + \varepsilon_t$ (1)
The analysis aimed to simultaneously analyze the asymmetric short-term and long-term effects of the variables, so Equation (1) was reformulated as a NARDL model in the form of an error correction model (ECM), as presented in Equation (2):
$\Delta LSP_t = \mu + \rho LSP_{t-1} + \theta_1^{+}LDEPU_{t-1}^{+} + \theta_1^{-}LDEPU_{t-1}^{-} + \theta_2^{+}LGEPU_{t-1}^{+} + \theta_2^{-}LGEPU_{t-1}^{-} + \boldsymbol{\delta}'\mathbf{Z}_{t-1} + \sum_{j=1}^{p-1}\varphi_j \Delta LSP_{t-j} + \sum_{j=0}^{q-1}\left(\pi_j^{+}\Delta LDEPU_{t-j}^{+} + \pi_j^{-}\Delta LDEPU_{t-j}^{-} + \omega_j^{+}\Delta LGEPU_{t-j}^{+} + \omega_j^{-}\Delta LGEPU_{t-j}^{-} + \boldsymbol{\lambda}_j'\Delta\mathbf{Z}_{t-j}\right) + \varepsilon_t$ (2)
where $\mathbf{Z}_t$ denotes the vector of control variables (the exchange rate, global oil prices, the consumer price index, money supply, real non-oil GDP, and stock market liquidity).
Before estimating the NARDL model, several preliminary tests were conducted to ensure its statistical validity and the suitability of applying this nonlinear model. These tests included the BDS test for nonlinear dependence, the ADF and PP tests for stationarity, the Zivot–Andrews test for structural breaks, the cointegration test of Pesaran et al. (2001) for long-run relationships, and the Wald test for asymmetry. The results confirmed that the data satisfied the necessary assumptions for valid estimation and that the NARDL model was appropriate for the analysis. Following the estimation of the NARDL model, several diagnostic tests were performed to assess the reliability of the results. These included the ARCH and Breusch–Godfrey tests to detect heteroscedasticity and autocorrelation in the residuals, as well as the CUSUM and CUSUMQ to check for parameter stability. The outcomes of these diagnostic tests indicated that the model was correctly specified and that the findings were robust and reliable.
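As a rough sketch of the NARDL building blocks, the code below decomposes an uncertainty series into partial sums of positive and negative changes and estimates a simple error-correction equation by OLS, recovering long-run asymmetric coefficients as -theta/rho in the spirit of Shin et al. (2014). The series, lag structure, and variable set are placeholders, not the paper's full specification.

```python
# Hedged sketch: partial-sum decomposition and a simple NARDL-style ECM by OLS.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 112                                            # quarterly data, 1997-2024
lsp = np.cumsum(rng.normal(0.02, 0.1, T))          # log stock price index (synthetic)
ldepu = np.cumsum(rng.normal(0.0, 0.1, T))         # log domestic EPU index (synthetic)

d = np.diff(ldepu, prepend=ldepu[0])
pos = np.cumsum(np.clip(d, 0, None))               # partial sum of positive shocks
neg = np.cumsum(np.clip(d, None, 0))               # partial sum of negative shocks

df = pd.DataFrame({"lsp": lsp, "pos": pos, "neg": neg})
df["d_lsp"] = df["lsp"].diff()
df["d_pos"] = df["pos"].diff()
df["d_neg"] = df["neg"].diff()

# ECM form: change in lsp on lagged levels (long run) and current differences (short run).
X = sm.add_constant(pd.DataFrame({
    "lsp_l1": df["lsp"].shift(1),
    "pos_l1": df["pos"].shift(1),
    "neg_l1": df["neg"].shift(1),
    "d_pos": df["d_pos"],
    "d_neg": df["d_neg"],
}))
res = sm.OLS(df["d_lsp"], X, missing="drop").fit()
print(res.summary())

# Long-run asymmetric coefficients, as in Shin et al. (2014): -theta / rho.
rho = res.params["lsp_l1"]
print("Long-run effect of positive shocks:", -res.params["pos_l1"] / rho)
print("Long-run effect of negative shocks:", -res.params["neg_l1"] / rho)
```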
Results and Discussion
The results from the dynamic NARDL model (Table 1) showed that, in the short term, the lagged value of the stock market index has a significant positive effect on its returns. This indicates that past market growth can stimulate future growth. In the short term, DEPU displayed asymmetric effects: positive shocks have a significant positive impact, while negative shocks have a significant negative effect. GEPU also showed delayed and asymmetric effects. Positive GEPU shocks are statistically insignificant, but their first and second lags exert significant positive impacts on stock market returns. Negative GEPU shocks are also statistically insignificant, although their first lag has a significant negative effect. Regarding the control variables, the exchange rate had a positive and significant effect in the current period, while its lag was statistically insignificant. Moreover, global oil prices can exert positive effects in the second and third lags, but not in the current period or the first lag.
Table 1. Results of Dynamic NARDL Model Estimation
Variable | Coefficient | Standard Error | t-Statistic | p-value
LSP(-1) | 0.30 | 0.09 | 3.12 | 0.00
LDEPU_POS | 0.23 | 0.10 | 2.32 | 0.02
LDEPU_POS(-1) | 0.32 | 0.93 | 0.34 | 0.72
LDEPU_NEG | -0.22 | 0.08 | -2.51 | 0.01
LGEPU_POS | 1.02 | 0.70 | 1.45 | 0.14
LGEPU_POS(-1) | 1.12 | 0.52 | 2.15 | 0.03
LGEPU_POS(-2) | 1.29 | 0.56 | 2.30 | 0.02
LGEPU_NEG | 0.64 | 0.39 | 1.64 | 0.10
LGEPU_NEG(-1) | -1.45 | 0.54 | -2.66 | 0.00
LEX | 0.93 | 0.44 | 2.07 | 0.04
LEX(-1) | -0.75 | 0.51 | -1.45 | 0.14
LOIL | -0.35 | 0.30 | -1.15 | 0.25
LOIL(-1) | 0.53 | 0.44 | 1.18 | 0.23
LOIL(-2) | 0.98 | 0.45 | 2.17 | 0.03
LOIL(-3) | 1.42 | 0.29 | 4.82 | 0.00
LCPI | 1.27 | 0.47 | 2.70 | 0.00
LMS | -0.73 | 2.11 | -0.34 | 0.72
LMS(-1) | -1.29 | 2.46 | -0.52 | 0.60
LMS(-2) | 1.03 | 0.41 | 2.49 | 0.02
LMS(-3) | -1.82 | 4.55 | -0.40 | 0.69
LRNOGDP | 0.10 | 0.17 | 0.60 | 0.54
LLIQ | 0.03 | 0.10 | 0.35 | 0.72
LLIQ(-1) | 0.20 | 0.10 | 2.00 | 0.04
R² = 0.88; F-statistic = 7.6836; Prob(F-statistic) = 0.0000
Source: Research Results
The consumer price index was found to have a significant positive impact on stock market returns. Money supply has a significant positive effect only in the second lag. Real non-oil GDP is not statistically significant in the short term, whereas market liquidity is significant only in its first lag. The overall model significance was confirmed by the F-statistic.
The long-term results from the NARDL model (Table 2) indicated that DEPU exhibits significant asymmetric effects: positive shocks lead to a positive long-term impact on stock market returns, whereas negative shocks produce a negative long-term effect. GEPU also showed asymmetric long-term behavior, with only positive shocks having a statistically significant influence. Among the control variables, the exchange rate and real non-oil GDP exert positive and significant long-term effects on stock market returns, while global oil prices and the consumer price index have negative long-term effects. The ECM coefficient confirmed that, following any shock, the market gradually adjusts back to its long-term equilibrium.
Table 2. Long-Term Relationships and the Error Correction Model (ECM) Results From NARDL
Variable | Coefficient | Standard Error | t-Statistic | p-value
LDEPU_POS | 0.34 | 0.14 | 2.41 | 0.01
LDEPU_NEG | -0.28 | 0.09 | 2.92 | 0.00
LGEPU_POS | 1.02 | 0.37 | 2.70 | 0.00
LGEPU_NEG | -0.89 | 0.55 | -1.60 | 0.11
LEX | 0.69 | 0.24 | 2.87 | 0.00
LOIL | -0.92 | 0.45 | -2.04 | 0.04
LCPI | -1.93 | 0.80 | -2.39 | 0.01
LMS | -0.63 | 0.75 | -0.83 | 0.40
LRNOGDP | 1.38 | 0.56 | 2.47 | 0.01
LLIQ | 0.21 | 0.22 | 0.97 | 0.33
ECM | -0.25 | 0.11 | -2.2 | 0.02
Source: Research Results
Conclusion
According to the findings, both DEPU and GEPU exert significant and asymmetric effects on stock market returns in Iran. The results highlighted the importance of recognizing non-linear relationships when examining the impact of EPU, particularly in emerging markets such as Iran. In light of the findings, it is recommended that economic policymakers proceed with greater caution when announcing policies and avoid issuing contradictory or ambiguous signals that could trigger short-term market overreactions and instability. Moreover, achieving long-term economic stability requires careful attention to fundamental factors such as exchange rates, oil prices, and economic growth. The results also indicated that, in the short term, speculative and emotional behaviors play a substantial role in market fluctuations. These behaviors can be curbed by improving regulatory frameworks, for example by facilitating short-selling, futures contracts, and options trading. In the long term, enhancing investors’ financial literacy and encouraging long-term investment strategies can help reduce volatility driven by short-term speculation.