
Batches of jobs

We recommend grouping jobs that share the same sequence into a single batch. On the batch page of the web portal, you can monitor all the jobs and execute bulk actions to download their results or cancel them.

Lifecycle of a batch

A new batch has the PENDING status and is added to the targeted backend's queue. When the batch is at the top of the queue and the backend is idle, it is popped from the queue and its status is updated to RUNNING. All the jobs of that batch are executed on the backend in the order they were submitted. Each job is also created with the PENDING status, updated to RUNNING when execution starts, and ends with a termination status of DONE or ERROR. Once all the jobs of the batch reach a termination status, the batch status is updated to DONE and the backend moves on to the next batch in the queue.
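The lifecycle above can be summarized as a small transition table. This is an illustrative pure-Python sketch of the statuses described in this section, not part of the SDK:

```python
# Illustrative model of the job lifecycle described above
# (not part of the pasqal_cloud SDK).
TRANSITIONS = {
    "PENDING": {"RUNNING", "CANCELED"},
    "RUNNING": {"DONE", "ERROR"},
    # DONE, ERROR and CANCELED are terminal statuses.
    "DONE": set(),
    "ERROR": set(),
    "CANCELED": set(),
}

def can_transition(current: str, target: str) -> bool:
    """Check whether a status change is allowed in this model."""
    return target in TRANSITIONS.get(current, set())
```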

Before a job starts running, you may cancel it so that it is not executed on the backend. The job status will be updated to CANCELED and your project will be refunded the credits deducted for that job. You can also cancel the entire batch, which removes it from the queue if it has not started, or otherwise cancels all its pending jobs. Batches and jobs can be canceled from the portal or via the SDK.

Open batches for variational algorithms

For variational algorithms, you cannot know in advance the variables of all the jobs to add to the batch. In that case, you should create an "open" batch, to which you can keep adding new jobs while it is PENDING or RUNNING.

Typically, you create an open batch with one or more jobs, wait for these jobs to terminate, get their results and post-process them to compute the variables for the next jobs. Then, add one or more new jobs to the batch - the backend is reserved for your batch until it is DONE, so these new jobs are executed immediately. Once you are done adding jobs to your batch, "close" it using the dedicated method of the SDK or the action on the portal's batch page. This lets the backend know it can move on to the next batch.

Note: A batch will be interrupted with status TIMED_OUT if it remains open with no new jobs to run after a few minutes.

Here is an example of how to create an open batch, add jobs to it and close it:

# Create an open batch with 1 job
batch = sdk.create_batch(
    serialized_sequence,
    [{"runs": 50, "variables": {"omega_max": 9.5}}],
    open=True,
)

# Add some jobs to it and wait for the jobs to terminate
batch.add_jobs(
    [
        {"runs": 50, "variables": {"omega_max": 10}},
        {"runs": 50, "variables": {"omega_max": 10.5}},
    ],
    wait=True,
)
# When you have sent all the jobs to your batch, don't forget to mark it as closed
batch.close()
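The add/wait/close pattern above generalizes to a full variational loop. Below is a schematic, self-contained sketch: `FakeBatch` and `compute_next_variables` are stand-ins (not SDK API) for the batch object and your classical post-processing; only the call pattern mirrors the SDK code shown above.

```python
# Schematic variational loop. `FakeBatch` stands in for the SDK batch object
# and `compute_next_variables` for your classical optimizer; both are
# illustrative assumptions, not part of the pasqal_cloud SDK.
class FakeBatch:
    def __init__(self):
        self.jobs = []
        self.open = True

    def add_jobs(self, jobs, wait=True):
        # In the real SDK this submits the jobs and, with wait=True,
        # blocks until they terminate; here we just record them and
        # return made-up "results".
        self.jobs.extend(jobs)
        return [{"result": j["variables"]["omega_max"] * 2} for j in jobs]

    def close(self):
        self.open = False

def compute_next_variables(results):
    # Placeholder post-processing: derive the next parameters from results.
    return {"omega_max": results[-1]["result"] / 4 + 1}

batch = FakeBatch()
variables = {"omega_max": 9.5}
for _ in range(3):  # fixed number of iterations for the sketch
    results = batch.add_jobs([{"runs": 50, "variables": variables}], wait=True)
    variables = compute_next_variables(results)
batch.close()  # let the backend move on to the next batch
```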

Batching jobs with different registers

Jobs of a batch must share the same parametrized sequence, which usually defines a fixed register. However, you can define the sequence with a mappable register instead; this lets you define a different register for each job.

Note: A mappable register is defined from a trap layout, so the jobs must share the same layout. Otherwise, you will need to create one batch per layout.
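One way to handle jobs targeting several layouts is to group the job specifications by layout first, then create one batch per group. A pure-Python sketch (the job-spec shape and layout names here are illustrative, not an SDK format):

```python
from collections import defaultdict

# Illustrative job specifications tagged with the layout they require.
jobs = [
    {"layout": "triangular_100", "variables": {"qubits": {"q0": 1}}},
    {"layout": "square_64", "variables": {"qubits": {"q0": 3}}},
    {"layout": "triangular_100", "variables": {"qubits": {"q0": 5}}},
]

# One batch per layout: jobs sharing a layout go together.
batches = defaultdict(list)
for job in jobs:
    batches[job["layout"]].append(job)
```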

from pulser import Pulse, Sequence
from pulser.register.special_layouts import TriangularLatticeLayout
from pulser.devices import MockDevice

# let's create a layout shared by our jobs
layout = TriangularLatticeLayout(n_traps=100, spacing=5)
# we create a mappable register of 30 qubits out of this layout
map_register = layout.make_mappable_register(n_qubits=30)

# let's build a basic sequence for this mappable register and serialize it
seq = Sequence(map_register, MockDevice)
seq.declare_channel("rydberg", "rydberg_global")
seq.add(
    Pulse.ConstantPulse(duration=100, amplitude=1, detuning=0, phase=0),
    "rydberg",
)
serialized_sequence = seq.to_abstract_repr()

# now let's create a batch using this sequence
# the first job will use a register with atoms in traps 1, 5 and 7
# the second job will use a register with atoms in traps 2, 4, 8 and 26
batch = sdk.create_batch(
    serialized_sequence,
    [
        {"runs": 100, "variables": {"qubits": {"q0": 1, "q1": 5, "q2": 7}}},
        {"runs": 100, "variables": {"qubits": {"q0": 2, "q1": 4, "q2": 8, "q3": 26}}},
    ],
    emulator="EMU_FREE",
)

Actions on batches and jobs

Get job results

Job results are available only for jobs marked as DONE.

To get the results (in JSON format) of an individual job, navigate to the Batches page (linked in the top navigation). Then, in the list of batches, click on the batch containing the job you want. This opens the Batch details page, where all the jobs assigned to that batch are listed. Click on the job you want in the list to access its page, where you can download the results by clicking the Download results button in the top right corner.

Get all job results from a batch

You can get all job results (in JSON format) from a batch at once only when the batch is marked as DONE. To do so, navigate to the Batches page (linked in the top navigation). Then, in the list, click on the batch you want to access the Batch details page. There, you can download the results by clicking the Download results button in the top right corner.

Get a list of jobs

It is possible to retrieve all jobs, or a selection of them, with the get_jobs method. This method uses a pagination system that you have to handle: by default, a page returns 100 jobs, but this limit can be changed.

Here are a few examples of how to use it:

from datetime import datetime

from pasqal_cloud import JobFilters, JobStatus, PaginationParams

# Get the first 100 jobs, no filters applied
sdk.get_jobs()

# Get the first 40 jobs, no filters applied
sdk.get_jobs(pagination_params=PaginationParams(limit=40))

# Get the first 100 jobs from a given batch
sdk.get_jobs(filters=JobFilters(batch_id="batch_id"))

# Get the first 100 jobs in error from a specific project
sdk.get_jobs(filters=JobFilters(status=JobStatus.ERROR, project_id="project_id"))

# Get two jobs using two ids
sdk.get_jobs(filters=JobFilters(id=["job_id_1", "job_id_2"]))

# Get the first 20 cancelled jobs created in a given period from a specific list of users
sdk.get_jobs(
    pagination_params=PaginationParams(limit=20),
    filters=JobFilters(
        status=JobStatus.CANCELED,
        start_date=datetime(...),
        end_date=datetime(...),
        user_id=["user_id_1", "user_id_2"],
    ),
)

# Get the total number of jobs matching the filters
sdk.get_jobs(pagination_params=PaginationParams(offset=0)).total

# Get the first 300 jobs, no filters applied
jobs = []
jobs.extend(sdk.get_jobs(pagination_params=PaginationParams(offset=0)).results)
jobs.extend(sdk.get_jobs(pagination_params=PaginationParams(offset=100)).results)
jobs.extend(sdk.get_jobs(pagination_params=PaginationParams(offset=200)).results)
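The fixed-offset snippet above can be generalized into a loop that keeps fetching pages until a short page is returned. This sketch uses a stand-in `get_page` function in place of a real `sdk.get_jobs(pagination_params=PaginationParams(offset=...)).results` call:

```python
def fetch_all(get_page, page_size=100):
    """Collect results across pages until a page comes back short."""
    items, offset = [], 0
    while True:
        page = get_page(offset)
        items.extend(page)
        if len(page) < page_size:
            return items
        offset += page_size

# Stand-in for the paginated SDK call: 250 fake job ids served in pages.
data = list(range(250))

def get_page(offset, limit=100):
    return data[offset:offset + limit]

all_jobs = fetch_all(get_page)
```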

Retry a batch of jobs

It is possible to retry a selection of jobs from a CLOSED batch with the rebatch method.

from datetime import datetime

from pasqal_cloud import JobStatus, RebatchFilters

# Retry all jobs from a given batch
sdk.rebatch(batch_id)

# Retry the first job of a batch
sdk.rebatch(batch_id, RebatchFilters(id=batch.ordered_jobs[0].id))

# Retry all jobs in error
sdk.rebatch(batch_id, RebatchFilters(status=JobStatus.ERROR))

# Retry cancelled jobs created in a given period
sdk.rebatch(batch_id, RebatchFilters(status=JobStatus.CANCELED, start_date=datetime(...), end_date=datetime(...)))

# Retry jobs that have a run number between 5 and 10
sdk.rebatch(batch_id, RebatchFilters(min_runs=5, max_runs=10))

Retry a job in an open batch

It is possible to retry a single job within the same open batch as the original job using batch.retry. The batch must be open for this method to work.

batch = sdk.create_batch(..., open=True)

batch.retry(batch.ordered_jobs[0])

# As when adding a job, you can choose to wait for results.
batch.retry(batch.ordered_jobs[0], wait=True)

Get a list of supported device specifications

The SDK provides a method to retrieve the device specifications currently defined on PASQAL's cloud platform. These specs describe the physical constraints of our QPUs, which enforce rules on the Pulser sequences that can be run on them (e.g., maximum number of atoms, available pulse channels, ...).

sdk.get_device_specs_dict()

The method returns a dict object mapping each device type to its serialized device specifications. These specs can be used to instantiate a Device instance in the Pulser library.
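Assuming the returned dict maps device-type names to JSON strings, you could inspect it as follows. The specs content below is a toy stand-in for illustration, not real device data:

```python
import json

# Toy stand-in for the dict returned by sdk.get_device_specs_dict().
specs_dict = {
    "FRESNEL": json.dumps({"name": "Fresnel", "max_atom_num": 25}),
}

for device_type, serialized_specs in specs_dict.items():
    specs = json.loads(serialized_specs)
    # In practice you would hand `serialized_specs` to Pulser to build
    # a Device instance; here we only inspect the parsed JSON.
    print(device_type, "supports up to", specs["max_atom_num"], "atoms")
```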