Yahoo Web Search


Search results

  1. Katrin Renz (www.katrinrenz.de)

    Katrin Renz. katrin.renz@uni-tuebingen.de. Hi! I'm a PhD student in the Autonomous Vision Group (AVG) as part of the International Max Planck Research School for Intelligent Systems (IMPRS-IS), advised by Andreas Geiger.

  2. PlanT is based on imitation learning with a compact object-level input representation. On the Longest6 benchmark for CARLA, PlanT outperforms all prior methods (matching the driving score of the expert) while being 5.3× faster than equivalent pixel-based planning baselines during inference.
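The compact object-level input mentioned above can be sketched roughly as follows. This is a hypothetical illustration only (the feature layout, names, and token count are assumptions, not the repository's actual code): each nearby vehicle becomes a small fixed-size feature vector, and the resulting token set is what a transformer planner would consume instead of raw pixels.

```python
import numpy as np

def vehicles_to_tokens(vehicles, max_tokens=8, feat_dim=6):
    """Encode each vehicle as a compact feature vector (one token).

    Hypothetical layout: [x, y, yaw, speed, length, width], all
    relative to the ego vehicle. Unused rows stay zero-padded so the
    transformer always sees a fixed-size input.
    """
    tokens = np.zeros((max_tokens, feat_dim), dtype=np.float32)
    for i, v in enumerate(vehicles[:max_tokens]):
        tokens[i] = [v["x"], v["y"], v["yaw"],
                     v["speed"], v["length"], v["width"]]
    return tokens

# Example: two vehicles near the ego car
vehicles = [
    {"x": 5.0, "y": -1.2, "yaw": 0.1, "speed": 3.5, "length": 4.5, "width": 1.8},
    {"x": 12.0, "y": 2.0, "yaw": 0.0, "speed": 7.0, "length": 4.2, "width": 1.9},
]
tokens = vehicles_to_tokens(vehicles)  # shape (8, 6), rows 2..7 zero-padded
```

A few dozen such tokens are far cheaper to process than a full image, which is consistent with the reported inference speedup over pixel-based planners.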

  3. Articles 1–12. University of Tübingen, Tübingen AI Center & IMPRS-IS - Cited by 493 - Computer Vision - Machine Learning.

    • Overview
    • Project Page | Paper | Supplementary
    • Content
    • Setup
    • Data and models
    • Data generation
    • Training
    • Evaluation
    • Explainability
    • Perception PlanT

    News:

    19.01.2023: We released the code to generate the attention visualization.

    02.12.2022: We released the perception checkpoint and the code for the SENSORS and MAP track agents. The conda environment needs to be updated. The perception checkpoints are in the checkpoint folder; please download them again.

    11.11.2022: We made some changes to the agent files to ensure compatibility with our perception PlanT and therefore uploaded new checkpoint files. The old checkpoints no longer work with the current code.

    This repository provides code for the following paper:

    • Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata and Andreas Geiger, PlanT: Explainable Planning Transformers via Object-Level Representations, CoRL 2022.

    First, install CARLA and set up the conda environment.

    You can download our pretrained PlanT models by executing:

    To download our 3x dataset run:

    You can download our dataset or generate your own. To generate your own, you first need to start a CARLA server:

    To generate the data for the route specified in carla_agent_files/config/eval/train.yaml, you can run:

    If you also want to save the sensor data that we used to train the perception module, add the flag experiments.SAVE_SENSORS=1.

    To generate the whole dataset you can use the datagen.sh file.

    To run the PlanT training on the 3x dataset, run:

    To change any hyperparameters, have a look at training/config/model/PlanT.yaml. For general training settings (e.g., #GPUs), check training/config/config.yaml.

    This evaluates the PlanT model on the specified benchmark (default: longest6). The config is specified in the folder carla_agent_files/config.

    Start a Carla server (see Data generation).

    When the server is running, start the evaluation with:

    You can find the results of the evaluation in a newly created evaluation folder inside the model folder. If you want a (very minimalistic) visualization, set the viz flag (i.e., python leaderboard/scripts/run_evaluation.py user=$USER experiments=PlanTmedium3x eval=longest6 viz=1).

    The execution of the explainability agent has two stages: (1) a PlanT forward pass (no execution of actions) to obtain the attention weights; we filter the vehicles so that only those with the topk attention scores remain as input for the second stage. (2) We execute either the expert or PlanT with the filtered input, so the agent only sees the topk vehicles instead of all of them.
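The filtering step in stage (1) can be sketched as follows. This is a simplified illustration of topk attention filtering; the function and variable names are assumptions, not the repository's actual API:

```python
import numpy as np

def filter_topk_vehicles(vehicles, attention_scores, k):
    """Keep only the k vehicles the planner attends to most.

    Stage 1 yields one attention score per vehicle; stage 2 then runs
    the agent (expert or PlanT) on this filtered subset instead of on
    all vehicles.
    """
    scores = np.asarray(attention_scores)
    k = min(k, len(vehicles))
    topk_idx = np.argsort(scores)[::-1][:k]  # indices of the k largest scores
    return [vehicles[i] for i in sorted(topk_idx)]  # preserve original order

# Example: four vehicles, keep the two with the highest attention
vehicles = ["car_a", "car_b", "car_c", "car_d"]
attn = [0.05, 0.60, 0.10, 0.25]
print(filter_topk_vehicles(vehicles, attn, k=2))  # -> ['car_b', 'car_d']
```

Setting a very large k (as with experiments.topk=100000 below) makes the filter a no-op, so all vehicles pass through and the full attention pattern can be visualized.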

    Start a Carla server (see Data generation).

    When the server is running, start the evaluation with:

    To obtain the attention visualization set experiments.topk=100000 and in addition add the flag save_explainability_viz=True. This saves a video per route in a viz_vid folder. The image resolution can be changed in carla_agent_files/explainability_agent.py.

    We release two PlanT agents suitable for the two CARLA Leaderboard tracks. For the SENSORS track, we predict the route with our perception module. In the MAP track model, we get the route information from the map. The code is taken from the TransFuser (PAMI 2022) repo and adapted for our use case. The config is specified in the folder carla_agent_file...

  4. Katrin Renz received her master's degree from the University of Heilbronn in 2021. She is currently working toward the PhD degree in the Autonomous Vision Group led by Prof. Andreas Geiger, part of the Max Planck Institute for Intelligent Systems and the University of Tübingen, Germany.

  5. 2 code implementations • 11 Apr 2024 • Marcel Hallgarten, Julian Zapata, Martin Stoll, Katrin Renz, Andreas Zell. We assess existing state-of-the-art planners on our benchmark and show that neither rule-based nor learning-based planners can safely navigate the interPlan scenarios.

  6. Sep 10, 2022 · Website: https://www.katrinrenz.de/plant. Code: https://github.com/autonomousvision/plant. We propose PlanT, a state-of-the-art planner for self-driving based on object-level representations and a transformer architecture which can explain its decisions by identifying the most relevant object.
