pub struct ShardManager {
pub runners: Arc<Mutex<HashMap<ShardId, ShardRunnerInfo>>>,
/* private fields */
}
A manager for handling the status of shards by starting them, restarting them, and stopping them when required.
Note: The Client internally uses a shard manager. If you are using a Client, then you do not need to make one of these.
§Examples
Initialize a shard manager with a framework responsible for shards 0 through 2, of 5 total shards:
use std::env;
use std::sync::{Arc, OnceLock};
use serenity::client::{EventHandler, RawEventHandler};
use serenity::framework::{Framework, StandardFramework};
use serenity::gateway::{ShardManager, ShardManagerOptions};
use serenity::http::Http;
use serenity::model::gateway::GatewayIntents;
use serenity::prelude::*;
use tokio::sync::{Mutex, RwLock};
struct Handler;
impl EventHandler for Handler {}
impl RawEventHandler for Handler {}
// `http` is assumed to be an initialized `serenity::http::Http` client (its setup is hidden in the original docs).
let ws_url = Arc::new(Mutex::new(http.get_gateway().await?.url));
let data = Arc::new(RwLock::new(TypeMap::new()));
let event_handler = Arc::new(Handler) as Arc<dyn EventHandler>;
let framework = Arc::new(StandardFramework::new()) as Arc<dyn Framework + 'static>;
ShardManager::new(ShardManagerOptions {
    data,
    event_handlers: vec![event_handler],
    raw_event_handlers: vec![],
    framework: Arc::new(OnceLock::from(framework)),
    // the shard index to start initiating from
    shard_index: 0,
    // the number of shards to initiate (this initiates 0, 1, and 2)
    shard_init: 3,
    // the total number of shards in use
    shard_total: 5,
    ws_url,
    intents: GatewayIntents::non_privileged(),
    presence: None,
});
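The relationship between `shard_index`, `shard_init`, and `shard_total` above is plain arithmetic: the manager initiates the half-open range `[shard_index, shard_index + shard_init)` out of `shard_total` shards. A minimal standalone sketch (the helper name is illustrative, not part of serenity's API):

```rust
/// Illustrative helper, not part of serenity's API: the shard IDs a
/// manager configured with `shard_index` and `shard_init` will initiate,
/// i.e. the half-open range [index, index + init).
fn initiated_shards(index: u32, init: u32) -> Vec<u32> {
    (index..index + init).collect()
}

fn main() {
    // shard_index: 0, shard_init: 3 => shards 0, 1, and 2 of the total
    assert_eq!(initiated_shards(0, 3), vec![0, 1, 2]);
    // another process could run the next slice, e.g. shards 3 and 4 of 5
    assert_eq!(initiated_shards(3, 2), vec![3, 4]);
}
```

This is why a single bot can be spread across processes: each process gets its own index and init count against the same total.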
§Fields
runners: Arc<Mutex<HashMap<ShardId, ShardRunnerInfo>>>
The shard runners currently managed.
Note: It is highly discouraged to mutate this yourself unless you need to. Prefer the methods provided on this struct where possible.
§Implementations
impl ShardManager
pub fn new(opt: ShardManagerOptions) -> (Arc<Self>, Receiver<Result<(), GatewayError>>)
Creates a new shard manager, returning both the manager and a monitor for usage in a separate thread.
pub async fn has(&self, shard_id: ShardId) -> bool
Returns whether the shard manager contains an active instance of a shard runner responsible for the given ID.
If a shard has been queued but has not yet been initiated, then this will return false.
pub fn initialize(&self) -> Result<()>
Initializes all shards that the manager is responsible for.
This will communicate shard boots with the ShardQueuer so that they are properly queued.
pub async fn set_shards(&self, index: u32, init: u32, total: u32)
Sets the new sharding information for the manager.
This will shut down all existing shards. It will not instantiate the new shards.
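Because set_shards only tears down the old runners, re-sharding is a two-step operation. A sketch (assuming `manager` is the `Arc<Self>` returned by `ShardManager::new`, inside an async context):

```rust
// Sketch, not a complete program: resize to 10 total shards, with this
// manager responsible for shards 0 through 2. set_shards shuts down the
// old runners but does not start new ones, so initialize is called next.
manager.set_shards(0, 3, 10).await;
manager.initialize()?;
```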
pub async fn restart(&self, shard_id: ShardId)
Restarts a shard runner.
This sends a shutdown signal to the shard’s associated ShardRunner, and then queues an initialization of a shard runner for the same shard via the ShardQueuer.
§Examples
Restarting a shard by ID:
use serenity::model::id::ShardId;
use serenity::prelude::*;
// `client` is assumed to be an initialized Client; restart shard ID 7
client.shard_manager.restart(ShardId(7)).await;
pub async fn shards_instantiated(&self) -> Vec<ShardId>
Returns the ShardIds of the shards that have been instantiated and currently have a valid ShardRunner.
pub async fn shutdown(&self, shard_id: ShardId, code: u16)
Attempts to shut down the shard runner by ID.
Returns a boolean indicating whether a shard runner was present. This is not necessarily an indicator of whether the shard runner was successfully shut down.
Note: If the receiving end of an mpsc channel (owned by the shard runner) no longer exists, then the shard runner will not know it should shut down. This should never happen; it may already be stopped.
pub async fn shutdown_all(&self)
Sends a shutdown message for all shards that the manager is responsible for that are still known to be running.
If you only need to shut down a select number of shards, prefer looping over the Self::shutdown method.
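Combining shards_instantiated with shutdown, a selective shutdown loop might look like this (a sketch assuming `manager: Arc<ShardManager>` and an async context; close code 1000 indicates a normal closure):

```rust
// Sketch: shut down every currently-instantiated shard except shard 0.
for shard_id in manager.shards_instantiated().await {
    if shard_id != ShardId(0) {
        manager.shutdown(shard_id, 1000).await;
    }
}
```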
pub fn intents(&self) -> GatewayIntents
Returns the gateway intents used for this gateway connection.
pub async fn return_with_value(&self, ret: Result<(), GatewayError>)
pub fn shutdown_finished(&self, id: ShardId)
pub async fn restart_shard(&self, id: ShardId)
pub async fn update_shard_latency_and_stage(&self, id: ShardId, latency: Option<Duration>, stage: ConnectionStage)
§Trait Implementations
impl Debug for ShardManager
impl Drop for ShardManager
fn drop(&mut self)
A custom drop implementation to clean up after the manager.
This shuts down all active ShardRunners and attempts to tell the ShardQueuer to shut down.