Research Objective
To maximize the number of bits decoded by an energy harvesting (EH) receiver with a time-switching architecture through online optimization, considering both infinite- and finite-horizon scenarios.
Research Findings
The paper provides online policies for EH receivers that maximize the number of decoded bits. For the infinite horizon, an optimal policy achieves performance close to an upper bound. For the finite horizon, two policies are presented: one with high complexity but near-optimal performance, and one that is computationally efficient with good performance. Numerical results validate both approaches and show them to be promising for practical EH systems.
Research Limitations
The study assumes known statistics of the energy arrivals and channel gains, which may not be available in practice. The computational complexity of the backward-induction method is high for a large horizon N. The policies are designed for a specific system model and may not generalize to other architectures or scenarios.
1: Experimental Design and Method Selection:
The study formulates the online optimization as a Markov decision process (MDP) with continuous state and action spaces. For the finite horizon, it applies backward induction with state-space quantization; for the infinite horizon, it designs policies guided by an upper bound on achievable performance.
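To make the backward-induction-with-quantization step concrete, here is a minimal sketch. The battery state is quantized to a grid, and the value function is computed block by block from the horizon backwards. The reward model (log2(1 + b) bits when decoding at battery level b), the decoding cost, and the discrete arrival distribution are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

N = 10                                 # number of blocks (horizon)
levels = np.linspace(0.0, 5.0, 51)     # quantized battery states
E_support = np.array([0.5, 1.0, 1.5])  # assumed energy-arrival values
E_prob = np.array([0.3, 0.4, 0.3])     # their probabilities
B_MAX = levels[-1]
DECODE_COST = 1.0                      # assumed energy cost of one decoding block

def snap(b):
    """Map a continuous battery value to the nearest quantized level index."""
    return int(np.argmin(np.abs(levels - np.clip(b, 0.0, B_MAX))))

V = np.zeros((N + 1, len(levels)))     # V[n, s]: expected bits-to-go
policy = np.zeros((N, len(levels)), dtype=int)

for n in range(N - 1, -1, -1):
    for s, b in enumerate(levels):
        # Action 0: harvest -- battery grows by the random arrival.
        v_harvest = sum(p * V[n + 1, snap(b + e)]
                        for e, p in zip(E_support, E_prob))
        # Action 1: decode -- spend energy, collect log2(1 + b) bits.
        if b >= DECODE_COST:
            v_decode = np.log2(1 + b) + sum(
                p * V[n + 1, snap(b - DECODE_COST + e)]
                for e, p in zip(E_support, E_prob))
        else:
            v_decode = -np.inf         # decoding infeasible without energy
        policy[n, s] = int(v_decode > v_harvest)
        V[n, s] = max(v_harvest, v_decode)

print(V[0, snap(0.0)])   # expected bits decoded starting from an empty battery
```

Quantization makes the continuous-state problem tractable at the cost of a discretization error, which is why the complexity grows quickly with the horizon N and the grid resolution.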
2: Sample Selection and Data Sources:
The energy-status sequence {E1, E2, ..., EN} is modeled as independent and identically distributed random variables with a known cumulative distribution function, where Ei = eT + ěi: eT is the constant energy from the dedicated transmitter and ěi is the random energy from ambient RF sources.
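The arrival model can be simulated directly. In this sketch the ambient part ěi is drawn from an exponential distribution; that choice, and the numeric values of eT and the scale, are illustrative assumptions — the paper only requires that the CDF be known.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
e_T = 1.0                                       # assumed constant transmitter part
e_rand = rng.exponential(scale=0.5, size=n)     # assumed ambient-RF part ě_i
E = e_T + e_rand                                # i.i.d. sequence {E_1, ..., E_n}

# The empirical CDF converges to the assumed known CDF
# F(x) = 1 - exp(-(x - e_T) / 0.5) for x >= e_T.
x = 1.5
emp = np.mean(E <= x)
true = 1 - np.exp(-(x - e_T) / 0.5)
print(abs(emp - true))                          # small for large n
```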
3: List of Experimental Equipment and Materials:
Not explicitly mentioned in the paper; the study is theoretical and does not specify physical equipment.
4: Experimental Procedures and Operational Workflow:
For the infinite horizon, the receiver harvests energy during an initial set of blocks and then switches between harvesting and decoding according to its energy status. For the finite horizon, policies are derived via backward induction and state-space restriction. Numerical simulations are run to evaluate the policies.
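The infinite-horizon workflow above can be sketched as a simple simulation: harvest during an initial save-up phase, then in each block decode if the battery covers the decoding cost and harvest otherwise. All numeric choices (costs, arrival law, the log2(1 + SNR) per-block reward) are illustrative assumptions, not the paper's policy parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_policy(n_blocks=100_000, k_init=10, decode_cost=1.0, snr=3.0):
    """Simulate the harvest-then-switch policy; return long-run bits per block."""
    battery, bits = 0.0, 0.0
    for i in range(n_blocks):
        arrival = 0.2 + rng.exponential(0.5)    # E_i = e_T + ě_i (assumed law)
        if i < k_init or battery < decode_cost:
            battery += arrival                  # harvest this block
        else:
            battery -= decode_cost              # decode this block
            bits += np.log2(1 + snr)
    return bits / n_blocks

print(run_policy())
```

Because a decoding block harvests nothing under time switching, the long-run decoding fraction d is set by the energy balance d * cost = (1 - d) * E[arrival], which caps the achievable rate.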
5: Data Analysis Methods:
Performance is analyzed through expected rewards, comparison with upper bounds, and numerical results. Concentration tools such as the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality and the Chernoff bound are used in the proofs.
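As a sketch of the kind of concentration argument involved, the DKW inequality bounds the uniform error of an empirical CDF: with n samples, sup_x |F_n(x) - F(x)| <= eps holds with probability at least 1 - 2*exp(-2*n*eps^2). The exponential distribution here is an illustrative stand-in for the energy-arrival law.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
samples = np.sort(rng.exponential(scale=1.0, size=n))
F_true = 1 - np.exp(-samples)            # true CDF at the sorted sample points

# Kolmogorov-Smirnov statistic: sup deviation of the empirical CDF.
i = np.arange(1, n + 1)
sup_dev = max(np.max(i / n - F_true), np.max(F_true - (i - 1) / n))

# DKW band: choose eps so the failure probability is at most delta.
delta = 0.01
eps = np.sqrt(np.log(2 / delta) / (2 * n))
print(sup_dev, eps)                      # observed deviation vs. DKW band
```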